Oct 01 15:41:39 crc systemd[1]: Starting Kubernetes Kubelet...
Oct 01 15:41:39 crc restorecon[4728]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Oct 01 15:41:39 crc restorecon[4728]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc 
restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc 
restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 
15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc 
restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc 
restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 
crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 
15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc 
restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc 
restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc 
restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Oct 01 15:41:39 crc restorecon[4728]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c4,c17 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: 
/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:39 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 
01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc 
restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc 
restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc 
restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc 
restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Oct 01 15:41:40 crc restorecon[4728]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Oct 01 15:41:41 crc kubenswrapper[4949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 01 15:41:41 crc kubenswrapper[4949]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Oct 01 15:41:41 crc kubenswrapper[4949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 01 15:41:41 crc kubenswrapper[4949]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 01 15:41:41 crc kubenswrapper[4949]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Oct 01 15:41:41 crc kubenswrapper[4949]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.347855 4949 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355639 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355673 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355686 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355696 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355705 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355716 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355726 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355736 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355746 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 
15:41:41.355755 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355765 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355775 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355784 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355794 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355804 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355815 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355825 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355834 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355844 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355853 4949 feature_gate.go:330] unrecognized feature gate: NewOLM Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355864 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355873 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355884 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355894 4949 feature_gate.go:330] 
unrecognized feature gate: SigstoreImageVerification Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355904 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355919 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355933 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355944 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355954 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355964 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355973 4949 feature_gate.go:330] unrecognized feature gate: Example Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.355987 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356029 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356047 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356059 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356069 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356079 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356088 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356100 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356110 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356119 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356168 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356178 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356187 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356198 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356208 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356217 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356226 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356236 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356246 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356255 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356265 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356274 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356284 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356294 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356305 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356315 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356325 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356334 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356343 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356353 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356363 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356373 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356384 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356393 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356403 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356413 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356430 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356473 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356489 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.356502 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357409 4949 flags.go:64] FLAG: --address="0.0.0.0"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357442 4949 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357477 4949 flags.go:64] FLAG: --anonymous-auth="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357492 4949 flags.go:64] FLAG: --application-metrics-count-limit="100"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357506 4949 flags.go:64] FLAG: --authentication-token-webhook="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357519 4949 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357535 4949 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357549 4949 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357560 4949 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357572 4949 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357584 4949 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357596 4949 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357608 4949 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357619 4949 flags.go:64] FLAG: --cgroup-root=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357630 4949 flags.go:64] FLAG: --cgroups-per-qos="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357642 4949 flags.go:64] FLAG: --client-ca-file=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357653 4949 flags.go:64] FLAG: --cloud-config=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357664 4949 flags.go:64] FLAG: --cloud-provider=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357675 4949 flags.go:64] FLAG: --cluster-dns="[]"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357698 4949 flags.go:64] FLAG: --cluster-domain=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357709 4949 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357720 4949 flags.go:64] FLAG: --config-dir=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357732 4949 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357744 4949 flags.go:64] FLAG: --container-log-max-files="5"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357758 4949 flags.go:64] FLAG: --container-log-max-size="10Mi"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357769 4949 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357781 4949 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357793 4949 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357805 4949 flags.go:64] FLAG: --contention-profiling="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357816 4949 flags.go:64] FLAG: --cpu-cfs-quota="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357830 4949 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357842 4949 flags.go:64] FLAG: --cpu-manager-policy="none"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357853 4949 flags.go:64] FLAG: --cpu-manager-policy-options=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357886 4949 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357899 4949 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357911 4949 flags.go:64] FLAG: --enable-debugging-handlers="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357922 4949 flags.go:64] FLAG: --enable-load-reader="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357933 4949 flags.go:64] FLAG: --enable-server="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357945 4949 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357969 4949 flags.go:64] FLAG: --event-burst="100"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357981 4949 flags.go:64] FLAG: --event-qps="50"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.357992 4949 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358004 4949 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358016 4949 flags.go:64] FLAG: --eviction-hard=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358030 4949 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358043 4949 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358054 4949 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358066 4949 flags.go:64] FLAG: --eviction-soft=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358078 4949 flags.go:64] FLAG: --eviction-soft-grace-period=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358089 4949 flags.go:64] FLAG: --exit-on-lock-contention="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358103 4949 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358114 4949 flags.go:64] FLAG: --experimental-mounter-path=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358164 4949 flags.go:64] FLAG: --fail-cgroupv1="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358177 4949 flags.go:64] FLAG: --fail-swap-on="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358189 4949 flags.go:64] FLAG: --feature-gates=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358204 4949 flags.go:64] FLAG: --file-check-frequency="20s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358216 4949 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358227 4949 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358239 4949 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358251 4949 flags.go:64] FLAG: --healthz-port="10248"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358263 4949 flags.go:64] FLAG: --help="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358274 4949 flags.go:64] FLAG: --hostname-override=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358285 4949 flags.go:64] FLAG: --housekeeping-interval="10s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358297 4949 flags.go:64] FLAG: --http-check-frequency="20s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358309 4949 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358319 4949 flags.go:64] FLAG: --image-credential-provider-config=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358331 4949 flags.go:64] FLAG: --image-gc-high-threshold="85"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358342 4949 flags.go:64] FLAG: --image-gc-low-threshold="80"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358353 4949 flags.go:64] FLAG: --image-service-endpoint=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358445 4949 flags.go:64] FLAG: --kernel-memcg-notification="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358463 4949 flags.go:64] FLAG: --kube-api-burst="100"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358476 4949 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358491 4949 flags.go:64] FLAG: --kube-api-qps="50"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358503 4949 flags.go:64] FLAG: --kube-reserved=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358516 4949 flags.go:64] FLAG: --kube-reserved-cgroup=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358528 4949 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358541 4949 flags.go:64] FLAG: --kubelet-cgroups=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358553 4949 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358566 4949 flags.go:64] FLAG: --lock-file=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358577 4949 flags.go:64] FLAG: --log-cadvisor-usage="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358590 4949 flags.go:64] FLAG: --log-flush-frequency="5s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358603 4949 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358623 4949 flags.go:64] FLAG: --log-json-split-stream="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358636 4949 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358648 4949 flags.go:64] FLAG: --log-text-split-stream="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358660 4949 flags.go:64] FLAG: --logging-format="text"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358672 4949 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358686 4949 flags.go:64] FLAG: --make-iptables-util-chains="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358698 4949 flags.go:64] FLAG: --manifest-url=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358710 4949 flags.go:64] FLAG: --manifest-url-header=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358726 4949 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358739 4949 flags.go:64] FLAG: --max-open-files="1000000"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358755 4949 flags.go:64] FLAG: --max-pods="110"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358767 4949 flags.go:64] FLAG: --maximum-dead-containers="-1"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358780 4949 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358792 4949 flags.go:64] FLAG: --memory-manager-policy="None"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358805 4949 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358818 4949 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358831 4949 flags.go:64] FLAG: --node-ip="192.168.126.11"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358845 4949 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358874 4949 flags.go:64] FLAG: --node-status-max-images="50"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358887 4949 flags.go:64] FLAG: --node-status-update-frequency="10s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358900 4949 flags.go:64] FLAG: --oom-score-adj="-999"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358912 4949 flags.go:64] FLAG: --pod-cidr=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358924 4949 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358978 4949 flags.go:64] FLAG: --pod-manifest-path=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.358990 4949 flags.go:64] FLAG: --pod-max-pids="-1"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359003 4949 flags.go:64] FLAG: --pods-per-core="0"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359015 4949 flags.go:64] FLAG: --port="10250"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359027 4949 flags.go:64] FLAG: --protect-kernel-defaults="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359039 4949 flags.go:64] FLAG: --provider-id=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359052 4949 flags.go:64] FLAG: --qos-reserved=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359065 4949 flags.go:64] FLAG: --read-only-port="10255"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359077 4949 flags.go:64] FLAG: --register-node="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359089 4949 flags.go:64] FLAG: --register-schedulable="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359103 4949 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359165 4949 flags.go:64] FLAG: --registry-burst="10"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359179 4949 flags.go:64] FLAG: --registry-qps="5"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359191 4949 flags.go:64] FLAG: --reserved-cpus=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359203 4949 flags.go:64] FLAG: --reserved-memory=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359219 4949 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359232 4949 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359245 4949 flags.go:64] FLAG: --rotate-certificates="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359257 4949 flags.go:64] FLAG: --rotate-server-certificates="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359269 4949 flags.go:64] FLAG: --runonce="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359282 4949 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359295 4949 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359307 4949 flags.go:64] FLAG: --seccomp-default="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359318 4949 flags.go:64] FLAG: --serialize-image-pulls="true"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359372 4949 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359385 4949 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359397 4949 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359411 4949 flags.go:64] FLAG: --storage-driver-password="root"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359423 4949 flags.go:64] FLAG: --storage-driver-secure="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359436 4949 flags.go:64] FLAG: --storage-driver-table="stats"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359448 4949 flags.go:64] FLAG: --storage-driver-user="root"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359460 4949 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359473 4949 flags.go:64] FLAG: --sync-frequency="1m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359486 4949 flags.go:64] FLAG: --system-cgroups=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359498 4949 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359519 4949 flags.go:64] FLAG: --system-reserved-cgroup=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359552 4949 flags.go:64] FLAG: --tls-cert-file=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359566 4949 flags.go:64] FLAG: --tls-cipher-suites="[]"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359589 4949 flags.go:64] FLAG: --tls-min-version=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359653 4949 flags.go:64] FLAG: --tls-private-key-file=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359667 4949 flags.go:64] FLAG: --topology-manager-policy="none"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359679 4949 flags.go:64] FLAG: --topology-manager-policy-options=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359692 4949 flags.go:64] FLAG: --topology-manager-scope="container"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359704 4949 flags.go:64] FLAG: --v="2"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359722 4949 flags.go:64] FLAG: --version="false"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359737 4949 flags.go:64] FLAG: --vmodule=""
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359751 4949 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.359765 4949 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360195 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360216 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360229 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360240 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360251 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360263 4949 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360278 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360292 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360305 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360316 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360330 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360342 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360368 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360383 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360396 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360408 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360422 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360434 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360446 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360459 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360484 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Oct 01 
15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360496 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360508 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360524 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360557 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360569 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360581 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360593 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360615 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360626 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360638 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360649 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360660 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360670 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360681 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Oct 01 
15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360692 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360703 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360714 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360726 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360736 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360750 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360765 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360777 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360789 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360800 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360810 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360822 4949 feature_gate.go:330] unrecognized feature gate: Example Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360833 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360843 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 
15:41:41.360854 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360865 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360875 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360886 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360897 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360907 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360923 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360933 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360944 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360955 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360965 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.360994 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361004 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361017 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361031 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes 
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361043 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361053 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361064 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361078 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361091 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361102 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.361112 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.362251 4949 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.378683 4949 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.378723 4949 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378839 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378852 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378859 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378866 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378872 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378878 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378884 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378890 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378898 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378908 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378922 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378930 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378937 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378944 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378950 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378959 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378966 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378972 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378978 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378984 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378991 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.378997 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379004 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379010 4949 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379016 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379023 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379029 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379035 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379041 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379047 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379053 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379059 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379065 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379071 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379077 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379083 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379089 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379095 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379101 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379110 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379140 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379148 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379155 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379161 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379167 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379174 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379180 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379187 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379193 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379200 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379205 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379211 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379217 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379224 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379230 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379236 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379245 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379254 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379261 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379268 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379275 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379281 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379287 4949 feature_gate.go:330] unrecognized feature gate: Example
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379294 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379300 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379309 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379350 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379356 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379361 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379367 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379373 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.379381 4949 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379541 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379551 4949 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379558 4949 feature_gate.go:330] unrecognized feature gate: OVNObservability
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379565 4949 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379571 4949 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379578 4949 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379587 4949 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379596 4949 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379605 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379614 4949 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379622 4949 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379631 4949 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379639 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379646 4949 feature_gate.go:330] unrecognized feature gate: Example
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379652 4949 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379659 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379665 4949 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379671 4949 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379677 4949 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379683 4949 feature_gate.go:330] unrecognized feature gate: NewOLM
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379690 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379695 4949 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379702 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379707 4949 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379714 4949 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379720 4949 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379728 4949 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379734 4949 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379740 4949 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379747 4949 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379754 4949 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379760 4949 feature_gate.go:330] unrecognized feature gate: PinnedImages
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379767 4949 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379773 4949 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379782 4949 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379794 4949 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379806 4949 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379814 4949 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379822 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379831 4949 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379839 4949 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379845 4949 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379852 4949 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379858 4949 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379865 4949 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379877 4949 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379889 4949 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379896 4949 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379902 4949 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379909 4949 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379916 4949 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379922 4949 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379928 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379935 4949 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379942 4949 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379948 4949 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379955 4949 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379962 4949 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379968 4949 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379974 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379980 4949 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379987 4949 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.379995 4949 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.380002 4949 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.380009 4949 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.380016 4949 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.380023 4949 feature_gate.go:330] unrecognized feature gate: SignatureStores
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.380029 4949 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.380035 4949 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.380042 4949 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.380047 4949 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.380056 4949 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.381343 4949 server.go:940] "Client rotation is on, will bootstrap in background"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.386471 4949 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.386655 4949 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.389189 4949 server.go:997] "Starting client certificate rotation"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.389222 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.389459 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-10 18:06:11.534441487 +0000 UTC
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.389617 4949 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 1682h24m30.144830006s for next certificate rotation
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.415322 4949 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.417653 4949 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.433536 4949 log.go:25] "Validated CRI v1 runtime API"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.477474 4949 log.go:25] "Validated CRI v1 image API"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.479669 4949 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.486184 4949 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2025-10-01-15-37-24-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.486221 4949 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:41 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.505486 4949 manager.go:217] Machine: {Timestamp:2025-10-01 15:41:41.502993767 +0000 UTC m=+0.808600028 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2799998 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:de71db26-32c1-4956-9e9f-66fc023dcd38 BootID:a4890954-ee04-4573-a52a-d0437f2c0f47 Filesystems:[{Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:41 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:1c:9d:e9 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:1c:9d:e9 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:ec:16:2b Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:49:ad:78 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:4e:28:1c Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:63:b3:ef Speed:-1 Mtu:1496} {Name:ens7.23 MacAddress:52:54:00:5b:d5:0a Speed:-1 Mtu:1496} {Name:eth10 MacAddress:76:25:2f:3d:3f:e2 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:02:c6:a0:6f:0a:b4 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.505982 4949 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.506229 4949 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.507929 4949 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.508275 4949 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.508318 4949 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.508621 4949 topology_manager.go:138] "Creating topology manager with none policy"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.508639 4949 container_manager_linux.go:303] "Creating device plugin manager"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.509279 4949 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.509341 4949 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.509581 4949 state_mem.go:36] "Initialized new in-memory state store"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.509740 4949 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.513878 4949 kubelet.go:418] "Attempting to sync node with API server"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.513912 4949 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.513935 4949 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.513954 4949 kubelet.go:324] "Adding apiserver pod source"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.513971 4949 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.519083 4949 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.520673 4949 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.523099 4949 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.523935 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.524162 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.524734 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused
Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.524877 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError"
Oct 01 15:41:41
crc kubenswrapper[4949]: I1001 15:41:41.525569 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525620 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525650 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525665 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525686 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525699 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525712 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525733 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525747 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525764 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525800 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.525814 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.526576 4949 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.527188 4949 server.go:1280] "Started kubelet" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 
15:41:41.527665 4949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.529313 4949 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.529587 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:41 crc systemd[1]: Started Kubernetes Kubelet. Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.530474 4949 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.532273 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.532327 4949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.532672 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 16:09:21.614138539 +0000 UTC Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.533015 4949 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 1128h27m40.081140391s for next certificate rotation Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.533471 4949 volume_manager.go:287] "The desired_state_of_world populator starts" Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.533499 4949 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.533520 4949 volume_manager.go:289] "Starting Kubelet Volume Manager" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 
15:41:41.533542 4949 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.534187 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.534286 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.539539 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="200ms" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.540034 4949 factory.go:55] Registering systemd factory Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.540070 4949 factory.go:221] Registration of the systemd container factory successfully Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.540739 4949 factory.go:153] Registering CRI-O factory Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.540775 4949 factory.go:221] Registration of the crio container factory successfully Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.540847 4949 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Oct 01 15:41:41 crc 
kubenswrapper[4949]: I1001 15:41:41.540879 4949 factory.go:103] Registering Raw factory Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.540897 4949 manager.go:1196] Started watching for new ooms in manager Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.541931 4949 manager.go:319] Starting recovery of all containers Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.541295 4949 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.188:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.186a6850ff0a03bb default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2025-10-01 15:41:41.527118779 +0000 UTC m=+0.832725010,LastTimestamp:2025-10-01 15:41:41.527118779 +0000 UTC m=+0.832725010,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.548472 4949 server.go:460] "Adding debug handlers to kubelet server" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554402 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554546 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Oct 01 15:41:41 crc 
kubenswrapper[4949]: I1001 15:41:41.554591 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554621 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554649 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554675 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554700 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554728 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554759 4949 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554800 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554848 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554877 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.554977 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555015 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555117 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555268 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555310 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555339 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555367 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555411 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555442 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" 
seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555489 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555524 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555562 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555642 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555668 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555710 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Oct 01 15:41:41 crc 
kubenswrapper[4949]: I1001 15:41:41.555744 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555790 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555828 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555853 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555940 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555972 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.555998 4949 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556026 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556064 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556091 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556165 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556196 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556222 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual 
state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556300 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556330 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556357 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556382 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556418 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556447 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" 
volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556526 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556559 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556649 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556707 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556746 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556795 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556938 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.556982 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557022 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557060 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557170 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557204 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" 
seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557361 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557499 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557545 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557580 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557615 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557642 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557684 
4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557711 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557736 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557763 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557799 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557828 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557854 4949 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557879 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557905 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557944 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557967 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.557998 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558025 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558062 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558088 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558243 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558275 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558299 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558340 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558380 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558409 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558435 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558473 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558584 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558615 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" 
seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558652 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558683 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558774 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558802 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558839 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558879 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558904 
4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.558934 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559011 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559039 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559066 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559093 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559118 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559190 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559248 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559297 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559326 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559365 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559402 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" 
volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559455 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559485 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559542 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559583 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.559616 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567370 4949 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" 
volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567436 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567465 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567485 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567506 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567527 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567546 4949 reconstruct.go:130] "Volume is marked as 
uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567565 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567587 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567612 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567633 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567652 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567671 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567691 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567711 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567733 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567752 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567770 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567791 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" 
volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567809 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567829 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567847 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567866 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567920 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567938 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567939 4949 manager.go:324] Recovery completed Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567960 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.567981 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568000 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568019 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568037 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568056 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568077 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568094 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568113 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568158 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568181 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568200 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" 
volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568222 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568283 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568304 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568324 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568357 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568377 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568408 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568427 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568446 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568466 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568488 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568516 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" 
seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568543 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568565 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568584 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568603 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568621 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568642 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568661 
4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568682 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568701 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568719 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568738 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568758 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568796 4949 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568816 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568838 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568857 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568879 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568898 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568916 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568935 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568956 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568974 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.568995 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569013 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569033 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" 
volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569051 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569071 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569090 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569110 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569155 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569178 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" 
volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569196 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569217 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569235 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569252 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569270 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569289 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" 
seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569309 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569328 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569347 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569367 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569386 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569405 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 
15:41:41.569424 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569454 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569472 4949 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569493 4949 reconstruct.go:97] "Volume reconstruction finished" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.569505 4949 reconciler.go:26] "Reconciler: start to sync state" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.579184 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.582595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.582656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.582685 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.584680 4949 cpu_manager.go:225] "Starting CPU manager" policy="none" Oct 01 15:41:41 crc 
kubenswrapper[4949]: I1001 15:41:41.584713 4949 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.584744 4949 state_mem.go:36] "Initialized new in-memory state store" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.597604 4949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.599816 4949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.600319 4949 status_manager.go:217] "Starting to sync pod status with apiserver" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.600352 4949 kubelet.go:2335] "Starting kubelet main sync loop" Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.600425 4949 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Oct 01 15:41:41 crc kubenswrapper[4949]: W1001 15:41:41.602275 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.602361 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.604567 4949 policy_none.go:49] "None policy: Start" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.605787 4949 memory_manager.go:170] "Starting memorymanager" policy="None" Oct 01 15:41:41 crc 
kubenswrapper[4949]: I1001 15:41:41.605854 4949 state_mem.go:35] "Initializing new in-memory state store" Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.633797 4949 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.655584 4949 manager.go:334] "Starting Device Plugin manager" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.655637 4949 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.655649 4949 server.go:79] "Starting device plugin registration server" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.656009 4949 eviction_manager.go:189] "Eviction manager: starting control loop" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.656023 4949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.656215 4949 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.656279 4949 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.656286 4949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.663665 4949 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.700735 4949 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.700897 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.702561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.702633 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.702655 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.702913 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.703230 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.703301 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.704651 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.704723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.704750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.704662 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.704795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.704806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.704915 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.705004 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.705035 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.706132 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.706157 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.706166 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.706318 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.706503 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.706582 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.706702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.706728 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.706738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.707197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.707250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.707263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.707517 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.707620 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.707662 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.708348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.708373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.708385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.708921 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.708944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.708958 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.708944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.709011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.709028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.709223 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.709249 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.710073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.710098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.710109 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.740741 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="400ms" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.756109 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.757459 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.757521 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.757533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.757568 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.758273 4949 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.771830 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.771861 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.771882 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.771899 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.771917 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " 
pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.771937 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.771954 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.771968 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.771982 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.772048 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" 
Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.772105 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.772210 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.772251 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.772340 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.772404 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.873904 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.873987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874023 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874058 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874112 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874213 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874222 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874262 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874299 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874330 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874367 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: 
\"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874232 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874425 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874380 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874315 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874521 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 01 
15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874554 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874558 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874586 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874371 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874620 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874639 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874611 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874659 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874652 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874678 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874771 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874741 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874822 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.874911 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.959276 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.961400 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.961440 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.961454 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:41 crc kubenswrapper[4949]: I1001 15:41:41.961479 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 01 15:41:41 crc kubenswrapper[4949]: E1001 15:41:41.962009 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post 
\"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.041664 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.064928 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:42 crc kubenswrapper[4949]: W1001 15:41:42.083915 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-7d49cd34fc9656fa3dc84f523356330e51040ca9ebf0c6d39ca26c574d8af3e8 WatchSource:0}: Error finding container 7d49cd34fc9656fa3dc84f523356330e51040ca9ebf0c6d39ca26c574d8af3e8: Status 404 returned error can't find the container with id 7d49cd34fc9656fa3dc84f523356330e51040ca9ebf0c6d39ca26c574d8af3e8 Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.095593 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:42 crc kubenswrapper[4949]: W1001 15:41:42.098222 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-7a7b95403b29522f15005cfee6f5ef5e3a058996b93ae20963d32bcd69eaa3e6 WatchSource:0}: Error finding container 7a7b95403b29522f15005cfee6f5ef5e3a058996b93ae20963d32bcd69eaa3e6: Status 404 returned error can't find the container with id 7a7b95403b29522f15005cfee6f5ef5e3a058996b93ae20963d32bcd69eaa3e6 Oct 01 15:41:42 crc kubenswrapper[4949]: W1001 15:41:42.113027 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-c65c2f0481a5ed07ecd6d8ff017b6d4472a03103a21f98bd079284ea773084dd WatchSource:0}: Error finding container c65c2f0481a5ed07ecd6d8ff017b6d4472a03103a21f98bd079284ea773084dd: Status 404 returned error can't find the container with id c65c2f0481a5ed07ecd6d8ff017b6d4472a03103a21f98bd079284ea773084dd Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.116870 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.125630 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Oct 01 15:41:42 crc kubenswrapper[4949]: W1001 15:41:42.129901 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-a20fa81eca9df9317969d2953d456a454ffb57a5564a9ca22d7ba056476d6e21 WatchSource:0}: Error finding container a20fa81eca9df9317969d2953d456a454ffb57a5564a9ca22d7ba056476d6e21: Status 404 returned error can't find the container with id a20fa81eca9df9317969d2953d456a454ffb57a5564a9ca22d7ba056476d6e21 Oct 01 15:41:42 crc kubenswrapper[4949]: E1001 15:41:42.141793 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="800ms" Oct 01 15:41:42 crc kubenswrapper[4949]: W1001 15:41:42.142779 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-c777718cf0a4a5b693b5c849159818c41a24c96e157b19f5430d62af2e4a60d9 WatchSource:0}: Error finding container c777718cf0a4a5b693b5c849159818c41a24c96e157b19f5430d62af2e4a60d9: Status 404 returned error can't find the container with id c777718cf0a4a5b693b5c849159818c41a24c96e157b19f5430d62af2e4a60d9 Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.362277 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.363687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.363761 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.363779 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.363820 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 01 15:41:42 crc kubenswrapper[4949]: E1001 15:41:42.364471 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.530871 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:42 crc kubenswrapper[4949]: W1001 15:41:42.558827 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:42 crc kubenswrapper[4949]: E1001 15:41:42.558945 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.605912 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c65c2f0481a5ed07ecd6d8ff017b6d4472a03103a21f98bd079284ea773084dd"} Oct 01 15:41:42 crc kubenswrapper[4949]: 
I1001 15:41:42.607297 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7a7b95403b29522f15005cfee6f5ef5e3a058996b93ae20963d32bcd69eaa3e6"} Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.608763 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7d49cd34fc9656fa3dc84f523356330e51040ca9ebf0c6d39ca26c574d8af3e8"} Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.609933 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"c777718cf0a4a5b693b5c849159818c41a24c96e157b19f5430d62af2e4a60d9"} Oct 01 15:41:42 crc kubenswrapper[4949]: I1001 15:41:42.611081 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a20fa81eca9df9317969d2953d456a454ffb57a5564a9ca22d7ba056476d6e21"} Oct 01 15:41:42 crc kubenswrapper[4949]: W1001 15:41:42.692657 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:42 crc kubenswrapper[4949]: E1001 15:41:42.692952 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:42 
crc kubenswrapper[4949]: W1001 15:41:42.785364 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:42 crc kubenswrapper[4949]: E1001 15:41:42.785440 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:42 crc kubenswrapper[4949]: E1001 15:41:42.943173 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="1.6s" Oct 01 15:41:43 crc kubenswrapper[4949]: W1001 15:41:43.117452 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:43 crc kubenswrapper[4949]: E1001 15:41:43.117538 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.165173 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.167572 
4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.167639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.167657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.167715 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 01 15:41:43 crc kubenswrapper[4949]: E1001 15:41:43.168486 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.531298 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.616360 4949 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6" exitCode=0 Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.616480 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.616478 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6"} Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.617861 4949 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.617921 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.617940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.620044 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12"} Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.620095 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce"} Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.620104 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.620105 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334"} Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.620228 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522"} Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.621356 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.621390 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.621400 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.622524 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c" exitCode=0 Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.622664 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.622663 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c"} Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.627213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.627393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.627484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.630074 4949 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c" exitCode=0 Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 
15:41:43.630306 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c"} Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.630388 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.631163 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.631888 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.631964 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.631991 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.632467 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.632517 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.632532 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.632931 4949 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770" exitCode=0 Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.633004 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770"} Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.633071 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.634738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.634795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:43 crc kubenswrapper[4949]: I1001 15:41:43.634814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:44 crc kubenswrapper[4949]: W1001 15:41:44.378154 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:44 crc kubenswrapper[4949]: E1001 15:41:44.378259 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:44 crc kubenswrapper[4949]: W1001 15:41:44.499609 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:44 crc 
kubenswrapper[4949]: E1001 15:41:44.499702 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.530525 4949 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:44 crc kubenswrapper[4949]: E1001 15:41:44.544198 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.188:6443: connect: connection refused" interval="3.2s" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.642484 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6"} Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.642542 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3"} Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.642558 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e"} Oct 01 15:41:44 crc 
kubenswrapper[4949]: I1001 15:41:44.642571 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93"} Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.646186 4949 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7" exitCode=0 Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.646323 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7"} Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.646353 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.647000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.647024 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.647034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.648662 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"e3e4ba077e53324e0a6cb25d94228b1a6265878a923ef88934ee0d87967385c2"} Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.648845 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume 
controller attach/detach" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.649568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.649585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.649593 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.651546 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.651907 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.652189 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d"} Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.652210 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0"} Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.652220 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759"} Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.652529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.652556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.652566 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.654301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.654327 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.654345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:44 crc kubenswrapper[4949]: W1001 15:41:44.757994 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:44 crc kubenswrapper[4949]: E1001 15:41:44.758088 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:44 crc kubenswrapper[4949]: W1001 15:41:44.759562 4949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.188:6443: connect: connection refused Oct 01 15:41:44 crc kubenswrapper[4949]: E1001 
15:41:44.759601 4949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.188:6443: connect: connection refused" logger="UnhandledError" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.768616 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.770882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.770913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.770923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.770944 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 01 15:41:44 crc kubenswrapper[4949]: E1001 15:41:44.771341 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.188:6443: connect: connection refused" node="crc" Oct 01 15:41:44 crc kubenswrapper[4949]: I1001 15:41:44.862229 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.658797 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c"} Oct 01 15:41:45 crc kubenswrapper[4949]: 
I1001 15:41:45.658869 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.659805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.659822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.659830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.662067 4949 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e" exitCode=0 Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.662190 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.662214 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e"} Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.662242 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.662270 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.662193 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.662558 4949 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663251 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663327 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663355 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663575 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663583 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663552 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:45 crc kubenswrapper[4949]: I1001 15:41:45.663622 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.669572 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78"} Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.669657 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4"} Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.669676 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371"} Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.669691 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601"} Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.669712 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.669787 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.669716 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.670821 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:46 crc 
kubenswrapper[4949]: I1001 15:41:46.670860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.670831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.670872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.670894 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.670910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.702278 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.702510 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.703779 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.703851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.703869 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:46 crc kubenswrapper[4949]: I1001 15:41:46.708897 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.676495 
4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83"} Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.676533 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.676585 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.676534 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.677918 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.677959 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.677971 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.678113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.678193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.678208 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.679615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.679652 
4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.679664 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.736556 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.971746 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.973913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.973985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.974001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:47 crc kubenswrapper[4949]: I1001 15:41:47.974053 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 01 15:41:48 crc kubenswrapper[4949]: I1001 15:41:48.679391 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:48 crc kubenswrapper[4949]: I1001 15:41:48.679548 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:48 crc kubenswrapper[4949]: I1001 15:41:48.680972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:48 crc kubenswrapper[4949]: I1001 15:41:48.681018 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:48 crc kubenswrapper[4949]: I1001 
15:41:48.681030 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:48 crc kubenswrapper[4949]: I1001 15:41:48.681072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:48 crc kubenswrapper[4949]: I1001 15:41:48.681102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:48 crc kubenswrapper[4949]: I1001 15:41:48.681115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:49 crc kubenswrapper[4949]: I1001 15:41:49.866026 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:49 crc kubenswrapper[4949]: I1001 15:41:49.866530 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:49 crc kubenswrapper[4949]: I1001 15:41:49.868213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:49 crc kubenswrapper[4949]: I1001 15:41:49.868280 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:49 crc kubenswrapper[4949]: I1001 15:41:49.868302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:50 crc kubenswrapper[4949]: I1001 15:41:50.079087 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Oct 01 15:41:50 crc kubenswrapper[4949]: I1001 15:41:50.079442 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:50 crc kubenswrapper[4949]: I1001 15:41:50.081351 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Oct 01 15:41:50 crc kubenswrapper[4949]: I1001 15:41:50.081411 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:50 crc kubenswrapper[4949]: I1001 15:41:50.081432 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.244800 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.245042 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.246550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.246633 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.246661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:51 crc kubenswrapper[4949]: E1001 15:41:51.663774 4949 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Oct 01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.783839 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.784502 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.786309 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 
01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.786367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:51 crc kubenswrapper[4949]: I1001 15:41:51.786380 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:52 crc kubenswrapper[4949]: I1001 15:41:52.788744 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Oct 01 15:41:52 crc kubenswrapper[4949]: I1001 15:41:52.789051 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:52 crc kubenswrapper[4949]: I1001 15:41:52.791369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:52 crc kubenswrapper[4949]: I1001 15:41:52.791442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:52 crc kubenswrapper[4949]: I1001 15:41:52.791463 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:54 crc kubenswrapper[4949]: I1001 15:41:54.784662 4949 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Oct 01 15:41:54 crc kubenswrapper[4949]: I1001 15:41:54.785620 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for 
connection (Client.Timeout exceeded while awaiting headers)" Oct 01 15:41:54 crc kubenswrapper[4949]: I1001 15:41:54.869084 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:41:54 crc kubenswrapper[4949]: I1001 15:41:54.869526 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:54 crc kubenswrapper[4949]: I1001 15:41:54.870830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:54 crc kubenswrapper[4949]: I1001 15:41:54.870936 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:54 crc kubenswrapper[4949]: I1001 15:41:54.870982 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.508782 4949 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.509547 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.514638 4949 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" 
start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.514731 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.703053 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.705073 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c" exitCode=255 Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.705110 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c"} Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.705381 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.706241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.706359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.706426 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 01 15:41:55 crc kubenswrapper[4949]: I1001 15:41:55.707015 4949 scope.go:117] "RemoveContainer" containerID="02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c" Oct 01 15:41:56 crc kubenswrapper[4949]: I1001 15:41:56.710246 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Oct 01 15:41:56 crc kubenswrapper[4949]: I1001 15:41:56.712155 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3"} Oct 01 15:41:56 crc kubenswrapper[4949]: I1001 15:41:56.712379 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:56 crc kubenswrapper[4949]: I1001 15:41:56.713313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:56 crc kubenswrapper[4949]: I1001 15:41:56.713364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:56 crc kubenswrapper[4949]: I1001 15:41:56.713373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:59 crc kubenswrapper[4949]: I1001 15:41:59.874684 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:59 crc kubenswrapper[4949]: I1001 15:41:59.874902 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:41:59 crc kubenswrapper[4949]: I1001 15:41:59.875035 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:41:59 crc 
kubenswrapper[4949]: I1001 15:41:59.876648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:41:59 crc kubenswrapper[4949]: I1001 15:41:59.876690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:41:59 crc kubenswrapper[4949]: I1001 15:41:59.876702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:41:59 crc kubenswrapper[4949]: I1001 15:41:59.881814 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.483046 4949 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.485377 4949 trace.go:236] Trace[495182763]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Oct-2025 15:41:49.486) (total time: 10998ms): Oct 01 15:42:00 crc kubenswrapper[4949]: Trace[495182763]: ---"Objects listed" error: 10998ms (15:42:00.485) Oct 01 15:42:00 crc kubenswrapper[4949]: Trace[495182763]: [10.998843717s] [10.998843717s] END Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.485423 4949 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.485892 4949 trace.go:236] Trace[249763717]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Oct-2025 15:41:50.414) (total time: 10071ms): Oct 01 15:42:00 crc kubenswrapper[4949]: Trace[249763717]: ---"Objects listed" error: 10071ms (15:42:00.485) Oct 01 15:42:00 crc kubenswrapper[4949]: Trace[249763717]: [10.071324469s] [10.071324469s] 
END Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.485922 4949 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.486618 4949 trace.go:236] Trace[1869478936]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Oct-2025 15:41:49.182) (total time: 11303ms): Oct 01 15:42:00 crc kubenswrapper[4949]: Trace[1869478936]: ---"Objects listed" error: 11303ms (15:42:00.486) Oct 01 15:42:00 crc kubenswrapper[4949]: Trace[1869478936]: [11.303659973s] [11.303659973s] END Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.486650 4949 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.487297 4949 trace.go:236] Trace[176239710]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (01-Oct-2025 15:41:49.915) (total time: 10572ms): Oct 01 15:42:00 crc kubenswrapper[4949]: Trace[176239710]: ---"Objects listed" error: 10572ms (15:42:00.487) Oct 01 15:42:00 crc kubenswrapper[4949]: Trace[176239710]: [10.572221351s] [10.572221351s] END Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.487331 4949 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.488831 4949 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.489987 4949 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.527205 4949 apiserver.go:52] "Watching apiserver" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.531200 4949 reflector.go:368] Caches populated for *v1.Pod from 
pkg/kubelet/config/apiserver.go:66 Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.531477 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf"] Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.531786 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.531910 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.531971 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.532051 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.532060 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.532118 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.532315 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.532411 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.532484 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.533304 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.533747 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.533777 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.533780 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.534409 4949 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.534500 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.534748 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.534788 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.534798 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.535190 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Oct 01 15:42:00 crc kubenswrapper[4949]: 
I1001 15:42:00.564284 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.581654 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.589727 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.589779 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: 
\"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.589804 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.589834 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.589863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.589893 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.589925 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.589958 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.589991 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590025 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590056 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590087 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590118 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590179 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590210 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590243 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590258 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590266 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590276 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590351 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590380 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590372 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590401 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590468 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590463 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590554 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590657 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590661 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590695 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590705 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590683 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590730 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590761 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590793 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590823 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590851 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590881 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590942 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590955 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.590974 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591005 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591029 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591037 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591072 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591105 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591149 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591161 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591174 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591195 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591230 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591264 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: 
\"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591270 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591300 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591324 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591336 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591441 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591477 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591499 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591498 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591509 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591568 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591580 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591626 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591662 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591694 4949 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591726 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591734 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591749 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591760 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591796 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591829 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591848 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591894 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591903 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592018 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592051 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592192 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592248 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592272 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592337 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592351 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592440 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592470 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592495 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592633 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592632 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592651 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592702 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592705 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592814 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592839 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592905 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592956 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592991 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.592946 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.593044 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.593068 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594466 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.591861 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594521 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594542 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594593 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 
15:42:00.594611 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594627 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594623 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594643 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594659 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594679 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594695 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594713 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594729 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594747 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594764 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594782 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594799 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594815 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod 
\"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594833 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594834 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594849 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594869 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594918 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: 
\"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594935 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594951 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594969 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.594987 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595004 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595021 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595040 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595040 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595060 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595080 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595099 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod 
\"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595117 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595166 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595185 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595203 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595223 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595240 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595252 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595259 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595311 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595333 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595352 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: 
\"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595369 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595418 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595435 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595381 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595657 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595564 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595932 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.595455 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596060 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596104 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod 
\"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596174 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596188 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596212 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596250 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596279 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). 
InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596289 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596326 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596365 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596401 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596435 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596469 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596502 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596536 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596569 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596603 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596636 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596670 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596703 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596737 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596771 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596803 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596838 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596957 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597011 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597045 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597083 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597145 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: 
\"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597184 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597220 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597255 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597292 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597331 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597364 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597399 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597437 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597473 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597511 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597592 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: 
\"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597628 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597663 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597702 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597736 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597769 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597807 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: 
\"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597840 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597876 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597926 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597977 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598018 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598070 4949 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598178 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598231 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598266 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598332 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598382 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: 
\"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598436 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598484 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598535 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598605 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598669 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598722 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: 
\"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598777 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598832 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598904 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598970 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599028 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599085 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599176 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599232 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599287 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599342 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599400 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: 
\"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599448 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599523 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596277 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). 
InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596417 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596565 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596606 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596699 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596792 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.596843 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597344 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597374 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597384 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597845 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597679 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.597861 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598065 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598166 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598423 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599777 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598740 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598755 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598779 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598921 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.598969 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599292 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599576 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599592 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599646 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.601754 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.601987 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.602258 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.602414 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.602651 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.602801 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.602935 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.603105 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.603259 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.603313 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.603430 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.603574 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.603629 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.604008 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.604376 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.604407 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.604428 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.604726 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.604826 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.605163 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.605253 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.605360 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.605790 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.606091 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.606231 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.606605 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.606826 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.607164 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.607373 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.607668 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.607711 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.607840 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.607846 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.608054 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.608212 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.608361 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.608825 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.608822 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.608839 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.608968 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:42:01.108938641 +0000 UTC m=+20.414545042 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.609147 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.609365 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.609415 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.609446 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610314 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610316 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610370 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610433 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610483 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610565 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610448 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610664 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610717 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610869 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.610884 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611004 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611236 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611242 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.599596 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611346 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611356 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611388 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611414 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611439 4949 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611465 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611491 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611514 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611539 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611564 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: 
\"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611587 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611611 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611634 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611659 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611683 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611695 
4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611708 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611740 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611777 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611821 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611860 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611898 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611938 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612007 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612049 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612087 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612162 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612210 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612257 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612300 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612344 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612381 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612424 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612462 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612498 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612532 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612571 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612719 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612741 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612760 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612777 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612793 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612812 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612829 4949 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612831 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612847 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612874 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612895 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612915 4949 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612934 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612951 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612967 4949 reconciler_common.go:293] "Volume detached for 
volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612984 4949 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613003 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613019 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613036 4949 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613054 4949 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613071 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613087 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: 
\"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613104 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613151 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613171 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613188 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613210 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613229 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613249 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" 
DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613267 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613284 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613300 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613317 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613334 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613351 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613367 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc 
kubenswrapper[4949]: I1001 15:42:00.613383 4949 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613399 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613415 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613433 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613449 4949 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613465 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613480 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613496 4949 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613511 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613528 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613581 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613600 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613616 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613631 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613648 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613665 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613681 4949 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613698 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611781 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613714 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611808 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613730 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613749 4949 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613765 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613781 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613798 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613815 4949 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613831 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613848 4949 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613864 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613881 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613899 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613915 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613931 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613947 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613962 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: 
\"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613982 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613999 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614015 4949 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614032 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614049 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614065 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614083 4949 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614098 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614114 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614165 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614182 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614197 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614213 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614230 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: 
\"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614248 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614263 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614279 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614295 4949 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614310 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614328 4949 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614343 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" 
DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614357 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614372 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614387 4949 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614404 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614432 4949 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614448 4949 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614464 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614481 4949 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614497 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614511 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614529 4949 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614545 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614560 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614576 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614591 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614606 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614623 4949 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614639 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614655 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614672 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614689 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614705 4949 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614720 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614735 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614749 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614764 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614780 4949 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614796 4949 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614813 4949 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: 
I1001 15:42:00.614827 4949 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614842 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614857 4949 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614871 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614886 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614901 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614916 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614931 4949 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614946 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614962 4949 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614978 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614994 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615011 4949 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615025 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615040 4949 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615053 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615069 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615085 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615103 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615119 4949 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615443 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615462 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on 
node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615478 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611813 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611853 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611897 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.611970 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612533 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612761 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.612960 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613204 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613216 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613242 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613381 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613481 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613507 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613631 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613701 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613860 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.613963 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.615711 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614303 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614548 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614629 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614715 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.614924 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.616509 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.616590 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:01.116567401 +0000 UTC m=+20.422173782 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.616624 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.616648 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.616697 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.616851 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.616918 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.617267 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.617309 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.617752 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.617907 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.618245 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:01.118225986 +0000 UTC m=+20.423832197 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.617972 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.618391 4949 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.618688 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.618709 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.618765 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.618934 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.619199 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.619297 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.619337 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.619868 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.620213 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.620489 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.622705 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.634475 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.634652 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.634629 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.634583 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.634595 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.634921 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.635170 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.635254 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.632753 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.635176 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.635998 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:01.135978276 +0000 UTC m=+20.441584487 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.646959 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.649502 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.650099 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.650349 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.650381 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.650394 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.650404 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 01 15:42:00 crc kubenswrapper[4949]: E1001 15:42:00.650455 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:01.150436804 +0000 UTC m=+20.456042995 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.653966 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.654795 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.666893 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.667672 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.672108 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.681010 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.685612 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.716675 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.716735 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.716740 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.716876 4949 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.716887 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.716890 4949 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.716964 4949 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717092 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717176 4949 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717207 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717225 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717242 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717259 4949 reconciler_common.go:293] "Volume detached for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717273 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717287 4949 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717304 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717317 4949 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717345 4949 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717363 4949 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717379 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Oct 
01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717393 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717405 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717417 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717429 4949 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717441 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717453 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717465 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717477 4949 reconciler_common.go:293] "Volume detached for volume 
\"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717489 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717501 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717513 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717524 4949 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717536 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717550 4949 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717598 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 
01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717609 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717621 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717634 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717647 4949 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717659 4949 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717671 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717684 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717696 4949 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717709 4949 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717721 4949 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717736 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717748 4949 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717760 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717772 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717784 4949 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717795 4949 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717807 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717819 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717832 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717844 4949 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.717856 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.735521 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.849656 4949 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.862274 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Oct 01 15:42:00 crc kubenswrapper[4949]: W1001 15:42:00.865442 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-d34e25b7b98e9d47555c6542d9b88503b918240858679500c7ca44d510b9de73 WatchSource:0}: Error finding container d34e25b7b98e9d47555c6542d9b88503b918240858679500c7ca44d510b9de73: Status 404 returned error can't find the container with id d34e25b7b98e9d47555c6542d9b88503b918240858679500c7ca44d510b9de73 Oct 01 15:42:00 crc kubenswrapper[4949]: I1001 15:42:00.874925 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Oct 01 15:42:00 crc kubenswrapper[4949]: W1001 15:42:00.876443 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75a4c96_2883_4a0b_bab2_0fab2b6c0b49.slice/crio-36e26fc0f683d6644298998dc5c218d0cdb191577025b7ef75482dde9d617a16 WatchSource:0}: Error finding container 36e26fc0f683d6644298998dc5c218d0cdb191577025b7ef75482dde9d617a16: Status 404 returned error can't find the container with id 36e26fc0f683d6644298998dc5c218d0cdb191577025b7ef75482dde9d617a16 Oct 01 15:42:00 crc kubenswrapper[4949]: W1001 15:42:00.886554 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-18af5cc0d30edd8faae8cb460aaa8fdcd5c39b6cd218ae54eedd062f62227ed5 WatchSource:0}: Error finding container 
18af5cc0d30edd8faae8cb460aaa8fdcd5c39b6cd218ae54eedd062f62227ed5: Status 404 returned error can't find the container with id 18af5cc0d30edd8faae8cb460aaa8fdcd5c39b6cd218ae54eedd062f62227ed5 Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.120595 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.120921 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:42:02.120882733 +0000 UTC m=+21.426488995 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.121105 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.121231 4949 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.121284 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:02.121271704 +0000 UTC m=+21.426877905 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.121227 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.121356 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.121453 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:02.121428438 +0000 UTC m=+21.427034669 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.222329 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.222398 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.222572 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.222615 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.222633 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, 
object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.222701 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:02.222680037 +0000 UTC m=+21.528286238 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.222577 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.222777 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.222794 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.222879 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" 
failed. No retries permitted until 2025-10-01 15:42:02.222861602 +0000 UTC m=+21.528467813 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.605879 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.606425 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.607350 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.608012 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.608621 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.609212 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.609916 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.610533 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.611180 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.611711 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.612265 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.612875 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.613349 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.613851 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" 
path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.614353 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.614834 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.616406 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.616976 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.617996 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.618659 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.619214 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.619813 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.620325 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.620417 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.621068 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.621563 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.622268 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.623009 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.623580 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.624223 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.624698 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.625185 4949 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.625292 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.627933 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.629177 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.630375 4949 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.632001 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.632761 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.633913 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.634767 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.636527 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.636615 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.637266 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.638673 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.640022 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.640811 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.641935 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.642679 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.643936 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.645165 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" 
path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.646445 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.647012 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.647637 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.648753 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.649495 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.650709 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.655720 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.673083 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.687635 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.705032 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.718893 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify 
certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.729486 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc"} Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.729904 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"d34e25b7b98e9d47555c6542d9b88503b918240858679500c7ca44d510b9de73"} Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.731598 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2"} Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.731638 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f"} Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.731675 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"18af5cc0d30edd8faae8cb460aaa8fdcd5c39b6cd218ae54eedd062f62227ed5"} Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.733095 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" 
event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"36e26fc0f683d6644298998dc5c218d0cdb191577025b7ef75482dde9d617a16"} Oct 01 15:42:01 crc kubenswrapper[4949]: E1001 15:42:01.744802 4949 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.749097 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.769615 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, 
/tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.799645 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.816796 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.816972 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.817096 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/kube-controller-manager-crc"] Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.835367 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.853319 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.870103 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.882486 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.896925 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.910338 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.929461 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.953870 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.973345 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:01 crc kubenswrapper[4949]: I1001 15:42:01.988883 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.006372 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.042609 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.131289 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.131427 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:02 crc 
kubenswrapper[4949]: E1001 15:42:02.131471 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:42:04.131440791 +0000 UTC m=+23.437046982 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.131545 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.131615 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.131705 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:04.131680997 +0000 UTC m=+23.437287378 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.131707 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.131766 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:04.13175649 +0000 UTC m=+23.437362681 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.232601 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.232660 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.232869 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.232904 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.232923 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.232940 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.232958 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.232976 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 
15:42:02.233029 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:04.233008399 +0000 UTC m=+23.538614590 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.233105 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:04.23304088 +0000 UTC m=+23.538647261 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.600686 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.600780 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.600692 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.600898 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.601086 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:02 crc kubenswrapper[4949]: E1001 15:42:02.601112 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.829324 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.845854 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\"
,\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.846172 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.859605 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.859777 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.875651 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.893239 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.907088 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.922376 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.934442 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.951200 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.966971 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, 
/tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.981517 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:02 crc kubenswrapper[4949]: I1001 15:42:02.997411 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:02Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.011334 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.031913 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.044750 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.055839 4949 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.070063 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.085451 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.739824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86"} Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.757627 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.776185 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.794921 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.809420 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.844513 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.867794 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.887772 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.909753 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:03 crc kubenswrapper[4949]: I1001 15:42:03.931373 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:03Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.149657 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.149735 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.149783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.149863 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.149889 
4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.149938 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:08.149917024 +0000 UTC m=+27.455523215 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.149958 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:08.149948065 +0000 UTC m=+27.455554256 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.150021 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-01 15:42:08.149967595 +0000 UTC m=+27.455573806 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.250292 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.250351 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.250542 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.250565 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.250580 4949 projected.go:194] Error preparing 
data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.250646 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:08.250629138 +0000 UTC m=+27.556235329 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.251048 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.251069 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.251079 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.251108 4949 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:08.251099141 +0000 UTC m=+27.556705332 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.390292 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-kg2qk"] Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.390635 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-kg2qk" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.394858 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.395203 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.395354 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.396782 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-pgg4f"] Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.397206 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.400263 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.403214 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.403227 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.404009 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.427682 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.449953 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.476188 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runn
ing\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\
\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb86167
8edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.493374 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, 
/tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.509231 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c
9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tr
ue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.519872 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.531693 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.543850 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.553386 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d75c430b-f863-470c-b57f-def53bf840db-hosts-file\") pod \"node-resolver-kg2qk\" (UID: \"d75c430b-f863-470c-b57f-def53bf840db\") " pod="openshift-dns/node-resolver-kg2qk" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.553442 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r42ms\" (UniqueName: \"kubernetes.io/projected/25c0759d-c94a-438c-b478-48161acbb035-kube-api-access-r42ms\") pod \"node-ca-pgg4f\" (UID: \"25c0759d-c94a-438c-b478-48161acbb035\") " pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.553477 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25c0759d-c94a-438c-b478-48161acbb035-host\") pod \"node-ca-pgg4f\" (UID: \"25c0759d-c94a-438c-b478-48161acbb035\") " pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.553496 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/25c0759d-c94a-438c-b478-48161acbb035-serviceca\") pod \"node-ca-pgg4f\" (UID: \"25c0759d-c94a-438c-b478-48161acbb035\") " pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.553663 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tvfpb\" (UniqueName: \"kubernetes.io/projected/d75c430b-f863-470c-b57f-def53bf840db-kube-api-access-tvfpb\") pod \"node-resolver-kg2qk\" (UID: \"d75c430b-f863-470c-b57f-def53bf840db\") " pod="openshift-dns/node-resolver-kg2qk" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.554260 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.565094 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.580312 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.596259 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.600959 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.601021 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.601064 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.601259 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.601399 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:04 crc kubenswrapper[4949]: E1001 15:42:04.601493 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.609739 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":tru
e,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.623717 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.651454 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.654060 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d75c430b-f863-470c-b57f-def53bf840db-hosts-file\") pod \"node-resolver-kg2qk\" (UID: \"d75c430b-f863-470c-b57f-def53bf840db\") " pod="openshift-dns/node-resolver-kg2qk" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.654091 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r42ms\" (UniqueName: \"kubernetes.io/projected/25c0759d-c94a-438c-b478-48161acbb035-kube-api-access-r42ms\") pod \"node-ca-pgg4f\" (UID: \"25c0759d-c94a-438c-b478-48161acbb035\") " pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.654134 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25c0759d-c94a-438c-b478-48161acbb035-host\") pod \"node-ca-pgg4f\" (UID: \"25c0759d-c94a-438c-b478-48161acbb035\") " pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.654156 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/25c0759d-c94a-438c-b478-48161acbb035-serviceca\") pod 
\"node-ca-pgg4f\" (UID: \"25c0759d-c94a-438c-b478-48161acbb035\") " pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.654235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/25c0759d-c94a-438c-b478-48161acbb035-host\") pod \"node-ca-pgg4f\" (UID: \"25c0759d-c94a-438c-b478-48161acbb035\") " pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.654279 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tvfpb\" (UniqueName: \"kubernetes.io/projected/d75c430b-f863-470c-b57f-def53bf840db-kube-api-access-tvfpb\") pod \"node-resolver-kg2qk\" (UID: \"d75c430b-f863-470c-b57f-def53bf840db\") " pod="openshift-dns/node-resolver-kg2qk" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.655654 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/d75c430b-f863-470c-b57f-def53bf840db-hosts-file\") pod \"node-resolver-kg2qk\" (UID: \"d75c430b-f863-470c-b57f-def53bf840db\") " pod="openshift-dns/node-resolver-kg2qk" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.656806 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/25c0759d-c94a-438c-b478-48161acbb035-serviceca\") pod \"node-ca-pgg4f\" (UID: \"25c0759d-c94a-438c-b478-48161acbb035\") " pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.672239 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r42ms\" (UniqueName: \"kubernetes.io/projected/25c0759d-c94a-438c-b478-48161acbb035-kube-api-access-r42ms\") pod \"node-ca-pgg4f\" (UID: \"25c0759d-c94a-438c-b478-48161acbb035\") " pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 
15:42:04.672567 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"reso
urce-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e2775
3fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.677723 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tvfpb\" (UniqueName: \"kubernetes.io/projected/d75c430b-f863-470c-b57f-def53bf840db-kube-api-access-tvfpb\") pod \"node-resolver-kg2qk\" (UID: \"d75c430b-f863-470c-b57f-def53bf840db\") " pod="openshift-dns/node-resolver-kg2qk" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.686451 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.698461 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.712005 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-kg2qk" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.711992 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.722718 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-pgg4f" Oct 01 15:42:04 crc kubenswrapper[4949]: W1001 15:42:04.724299 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd75c430b_f863_470c_b57f_def53bf840db.slice/crio-18069dc0067af8030dbb56e81d2dde55df64434c22828de040b201ea4cdf790e WatchSource:0}: Error finding container 18069dc0067af8030dbb56e81d2dde55df64434c22828de040b201ea4cdf790e: Status 404 returned error can't find the container with id 18069dc0067af8030dbb56e81d2dde55df64434c22828de040b201ea4cdf790e Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.724588 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: W1001 15:42:04.736377 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod25c0759d_c94a_438c_b478_48161acbb035.slice/crio-dd88af27b06a7ee08a2181bfa9a3f8942158271de221a209aaba7578f4a9fa73 WatchSource:0}: Error finding container dd88af27b06a7ee08a2181bfa9a3f8942158271de221a209aaba7578f4a9fa73: Status 404 returned error can't find the container with id dd88af27b06a7ee08a2181bfa9a3f8942158271de221a209aaba7578f4a9fa73 Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.737318 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:04Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.743140 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kg2qk" event={"ID":"d75c430b-f863-470c-b57f-def53bf840db","Type":"ContainerStarted","Data":"18069dc0067af8030dbb56e81d2dde55df64434c22828de040b201ea4cdf790e"} Oct 01 15:42:04 crc kubenswrapper[4949]: I1001 15:42:04.744497 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pgg4f" event={"ID":"25c0759d-c94a-438c-b478-48161acbb035","Type":"ContainerStarted","Data":"dd88af27b06a7ee08a2181bfa9a3f8942158271de221a209aaba7578f4a9fa73"} Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.167781 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-l6287"] Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.168327 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.170753 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-xr96p"] Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.171445 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-s5r4m"] Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.171586 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.171706 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.172405 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.172952 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.172952 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.173827 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.174456 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.174976 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.176292 
4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.176516 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pppfm"] Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.182946 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.183216 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.183333 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.183480 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.183608 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.184057 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.185836 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.187191 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.187337 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.187462 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.187547 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.187618 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.187755 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.192468 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.204833 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.217870 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.230506 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.244225 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.260923 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.286101 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-p
od-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8f
d7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.301991 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.316494 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.329712 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.344287 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.355656 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-os-release\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359065 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-run-netns\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359085 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0e15cd67-d4ad-49b8-96a6-da114105e558-rootfs\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359184 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-env-overrides\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359202 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359217 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-var-lib-cni-bin\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359233 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-hostroot\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359317 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-run-multus-certs\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-kubelet\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359438 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-cnibin\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359474 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-cni-binary-copy\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359501 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-openvswitch\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359528 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xghl\" (UniqueName: \"kubernetes.io/projected/0e15cd67-d4ad-49b8-96a6-da114105e558-kube-api-access-8xghl\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359625 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6656de7a-9b8f-4714-81ae-3685c01f11fb-cni-binary-copy\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359682 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-cni-dir\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359708 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-log-socket\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc 
kubenswrapper[4949]: I1001 15:42:05.359772 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-system-cni-dir\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359800 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ghb86\" (UniqueName: \"kubernetes.io/projected/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-kube-api-access-ghb86\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359897 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6656de7a-9b8f-4714-81ae-3685c01f11fb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359969 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-etc-kubernetes\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.359991 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-systemd-units\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 
15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360005 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-ovn-kubernetes\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-var-lib-cni-multus\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360037 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b30af5f-469f-4bee-b77f-4b58edba325b-ovn-node-metrics-cert\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360098 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-script-lib\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360164 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-cnibin\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " 
pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360181 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-os-release\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360198 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-netd\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360253 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdnzf\" (UniqueName: \"kubernetes.io/projected/6656de7a-9b8f-4714-81ae-3685c01f11fb-kube-api-access-kdnzf\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360272 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-slash\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360316 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-node-log\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360331 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-bin\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360346 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7t8d\" (UniqueName: \"kubernetes.io/projected/6b30af5f-469f-4bee-b77f-4b58edba325b-kube-api-access-f7t8d\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360422 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e15cd67-d4ad-49b8-96a6-da114105e558-proxy-tls\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360443 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-socket-dir-parent\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360477 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-run-k8s-cni-cncf-io\") pod 
\"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360581 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-var-lib-kubelet\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360686 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-config\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360725 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-var-lib-openvswitch\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360785 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e15cd67-d4ad-49b8-96a6-da114105e558-mcd-auth-proxy-config\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: 
\"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-conf-dir\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360828 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-daemon-config\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360852 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-systemd\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360875 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-ovn\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360901 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-system-cni-dir\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360945 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: 
\"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-netns\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.360976 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-etc-openvswitch\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.361000 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.368737 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.381789 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.396164 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.414602 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.433713 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462565 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e15cd67-d4ad-49b8-96a6-da114105e558-mcd-auth-proxy-config\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-conf-dir\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462635 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-daemon-config\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462660 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-systemd\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462681 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-ovn\") pod 
\"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462742 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-system-cni-dir\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462766 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-netns\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462780 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-conf-dir\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462873 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-systemd\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462901 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-system-cni-dir\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " 
pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462872 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-etc-openvswitch\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462807 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-etc-openvswitch\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462940 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-netns\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462952 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462971 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-ovn\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" 
Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.462987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-os-release\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463007 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-run-netns\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463042 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0e15cd67-d4ad-49b8-96a6-da114105e558-rootfs\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463064 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-env-overrides\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc 
kubenswrapper[4949]: I1001 15:42:05.463082 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463098 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-var-lib-cni-bin\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463147 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-hostroot\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463165 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-run-multus-certs\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463184 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-kubelet\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463204 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-cnibin\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463219 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-cni-binary-copy\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463235 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-openvswitch\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463258 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xghl\" (UniqueName: \"kubernetes.io/projected/0e15cd67-d4ad-49b8-96a6-da114105e558-kube-api-access-8xghl\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/6656de7a-9b8f-4714-81ae-3685c01f11fb-cni-binary-copy\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463306 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-cni-dir\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463322 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-log-socket\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463351 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-system-cni-dir\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463368 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ghb86\" (UniqueName: \"kubernetes.io/projected/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-kube-api-access-ghb86\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463393 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6656de7a-9b8f-4714-81ae-3685c01f11fb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463411 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-etc-kubernetes\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463427 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-systemd-units\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463442 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-ovn-kubernetes\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463459 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-var-lib-cni-multus\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463474 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b30af5f-469f-4bee-b77f-4b58edba325b-ovn-node-metrics-cert\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463505 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-daemon-config\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463492 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-script-lib\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463591 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-cnibin\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463613 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-os-release\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463629 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-netd\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463652 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kdnzf\" (UniqueName: \"kubernetes.io/projected/6656de7a-9b8f-4714-81ae-3685c01f11fb-kube-api-access-kdnzf\") pod \"multus-additional-cni-plugins-xr96p\" (UID: 
\"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463674 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-slash\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-node-log\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463707 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-bin\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463726 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f7t8d\" (UniqueName: \"kubernetes.io/projected/6b30af5f-469f-4bee-b77f-4b58edba325b-kube-api-access-f7t8d\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463746 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0e15cd67-d4ad-49b8-96a6-da114105e558-mcd-auth-proxy-config\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463762 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e15cd67-d4ad-49b8-96a6-da114105e558-proxy-tls\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463863 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-socket-dir-parent\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463867 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-run-netns\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463908 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/0e15cd67-d4ad-49b8-96a6-da114105e558-rootfs\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463936 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-run-k8s-cni-cncf-io\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc 
kubenswrapper[4949]: I1001 15:42:05.463969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-var-lib-kubelet\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463994 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-config\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464015 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-var-lib-openvswitch\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464142 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-var-lib-openvswitch\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.463521 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-os-release\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464199 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-socket-dir-parent\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464248 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-var-lib-kubelet\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464384 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-var-lib-cni-bin\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464413 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-cnibin\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464435 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-hostroot\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464472 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-run-multus-certs\") pod 
\"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464501 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-kubelet\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464514 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-os-release\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464533 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-cnibin\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464573 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-netd\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464600 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-env-overrides\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: 
I1001 15:42:05.464668 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-bin\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464669 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-openvswitch\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464706 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-slash\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464742 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-node-log\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464772 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-log-socket\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464812 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/6656de7a-9b8f-4714-81ae-3685c01f11fb-cni-binary-copy\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464854 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-etc-kubernetes\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464858 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-config\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464880 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-systemd-units\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464921 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-var-lib-cni-multus\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464949 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-multus-cni-dir\") pod \"multus-s5r4m\" (UID: 
\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464985 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-system-cni-dir\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.464185 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-script-lib\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.465053 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-cni-binary-copy\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.465055 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-host-run-k8s-cni-cncf-io\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.465085 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-ovn-kubernetes\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 
15:42:05.465169 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/6656de7a-9b8f-4714-81ae-3685c01f11fb-tuning-conf-dir\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.465319 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/6656de7a-9b8f-4714-81ae-3685c01f11fb-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.468057 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7
f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.468246 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0e15cd67-d4ad-49b8-96a6-da114105e558-proxy-tls\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.468273 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b30af5f-469f-4bee-b77f-4b58edba325b-ovn-node-metrics-cert\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.495743 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-kdnzf\" (UniqueName: \"kubernetes.io/projected/6656de7a-9b8f-4714-81ae-3685c01f11fb-kube-api-access-kdnzf\") pod \"multus-additional-cni-plugins-xr96p\" (UID: \"6656de7a-9b8f-4714-81ae-3685c01f11fb\") " pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.504615 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xghl\" (UniqueName: \"kubernetes.io/projected/0e15cd67-d4ad-49b8-96a6-da114105e558-kube-api-access-8xghl\") pod \"machine-config-daemon-l6287\" (UID: \"0e15cd67-d4ad-49b8-96a6-da114105e558\") " pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.518613 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f7t8d\" (UniqueName: \"kubernetes.io/projected/6b30af5f-469f-4bee-b77f-4b58edba325b-kube-api-access-f7t8d\") pod \"ovnkube-node-pppfm\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.520482 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ghb86\" (UniqueName: \"kubernetes.io/projected/ffe32683-6bbe-472a-811e-8fe0fd1d1bb6-kube-api-access-ghb86\") pod \"multus-s5r4m\" (UID: \"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\") " pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.530922 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.578178 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.594019 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.607458 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.620006 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.632157 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.652978 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.667514 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.682083 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.749759 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-pgg4f" event={"ID":"25c0759d-c94a-438c-b478-48161acbb035","Type":"ContainerStarted","Data":"d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9"} Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.751115 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-kg2qk" 
event={"ID":"d75c430b-f863-470c-b57f-def53bf840db","Type":"ContainerStarted","Data":"ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a"} Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.770291 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.785472 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.791096 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.794959 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-xr96p" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.802108 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-s5r4m" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.809412 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.809446 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:05 crc kubenswrapper[4949]: W1001 15:42:05.823345 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podffe32683_6bbe_472a_811e_8fe0fd1d1bb6.slice/crio-8993a171ed55e337d41ca26554b9cabe941b5a84c2becb69e69f8c9795874ad6 WatchSource:0}: Error finding container 8993a171ed55e337d41ca26554b9cabe941b5a84c2becb69e69f8c9795874ad6: Status 404 returned error can't find the container with id 8993a171ed55e337d41ca26554b9cabe941b5a84c2becb69e69f8c9795874ad6 Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.831109 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: W1001 15:42:05.839724 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b30af5f_469f_4bee_b77f_4b58edba325b.slice/crio-2c8aa2de9ec6d307212b2a9111bb246385942ea204db4894c90bb0534930e300 WatchSource:0}: Error finding container 2c8aa2de9ec6d307212b2a9111bb246385942ea204db4894c90bb0534930e300: Status 404 returned error can't find the container with id 2c8aa2de9ec6d307212b2a9111bb246385942ea204db4894c90bb0534930e300 Oct 01 
15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.849615 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.862850 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.881106 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.901201 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6
a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f584
08f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubern
etes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8f
d7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.916768 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.935400 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.953285 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.977498 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:05 crc kubenswrapper[4949]: I1001 15:42:05.999147 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:05Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 
15:42:06.012611 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.034462 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.047669 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"m
etrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.059391 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\
\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.073226 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\
\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\
":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.093696 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.126955 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:
43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.144554 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.155709 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.175360 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.201948 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.211175 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.227450 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.244604 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.259215 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.273534 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.286468 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.601428 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.601480 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.601578 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:06 crc kubenswrapper[4949]: E1001 15:42:06.601588 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:06 crc kubenswrapper[4949]: E1001 15:42:06.601791 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:06 crc kubenswrapper[4949]: E1001 15:42:06.601914 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.756950 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056" exitCode=0 Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.757054 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056"} Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.757134 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"2c8aa2de9ec6d307212b2a9111bb246385942ea204db4894c90bb0534930e300"} Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.758819 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633"} Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.758880 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"f937d2f3af7763c0dd253ca413850726658a91db0149cb51943d1cc4e5423253"} Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.760446 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s5r4m" event={"ID":"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6","Type":"ContainerStarted","Data":"68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419"} Oct 01 15:42:06 
crc kubenswrapper[4949]: I1001 15:42:06.760495 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s5r4m" event={"ID":"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6","Type":"ContainerStarted","Data":"8993a171ed55e337d41ca26554b9cabe941b5a84c2becb69e69f8c9795874ad6"} Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.762083 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerStarted","Data":"7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7"} Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.762115 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerStarted","Data":"8179ebd546de45d2598fb9c60d9ff469fb2ef67754919631c6e45e2218e45278"} Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.890433 4949 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.891942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.891990 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.892004 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.892113 4949 kubelet_node_status.go:76] "Attempting to register node" node="crc" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.900784 4949 kubelet_node_status.go:115] "Node was previously registered" node="crc" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.901063 4949 kubelet_node_status.go:79] 
"Successfully registered node" node="crc" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.902352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.902400 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.902413 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.902431 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.902443 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:06Z","lastTransitionTime":"2025-10-01T15:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:06 crc kubenswrapper[4949]: E1001 15:42:06.925408 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.928760 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.928791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.928802 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.928818 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.928829 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:06Z","lastTransitionTime":"2025-10-01T15:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:06 crc kubenswrapper[4949]: E1001 15:42:06.942650 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.952448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.952491 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.952503 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.952520 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.952540 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:06Z","lastTransitionTime":"2025-10-01T15:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.989461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.989505 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.989516 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.989532 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:06 crc kubenswrapper[4949]: I1001 15:42:06.989542 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:06Z","lastTransitionTime":"2025-10-01T15:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: E1001 15:42:07.001456 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: E1001 15:42:07.001631 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.003399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.003430 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.003441 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.003457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.003468 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.105544 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.105582 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.105596 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.105613 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.105627 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.207644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.207687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.207701 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.207720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.207736 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.316353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.316396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.316406 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.316443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.316457 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.419660 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.419748 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.419761 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.419779 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.419795 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.523519 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.523561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.523572 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.523588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.523598 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.626224 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.626265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.626275 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.626299 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.626312 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.729240 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.729315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.729330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.729353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.729366 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.769171 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.771275 4949 generic.go:334] "Generic (PLEG): container finished" podID="6656de7a-9b8f-4714-81ae-3685c01f11fb" containerID="7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7" exitCode=0 Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.771389 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerDied","Data":"7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.784758 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.803343 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.818530 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.833498 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.833549 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.833567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 
15:42:07.833587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.833599 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.838114 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.852687 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.869397 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.882567 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.896413 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.917140 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"read
y\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var
/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/o
cp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.930983 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.941554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.941612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.941626 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.941644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.941663 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:07Z","lastTransitionTime":"2025-10-01T15:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.948445 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64c
ca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.959307 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:07 crc kubenswrapper[4949]: I1001 15:42:07.983071 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": 
Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.027571 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.044991 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.045038 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.045052 4949 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.045069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.045082 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.055290 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.087320 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.104604 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 
15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.114983 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.125763 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8
b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.138802 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.148201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.148239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.148251 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.148270 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.148281 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.156058 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.172957 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.193057 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.193220 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.193249 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.193286 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:42:16.193246946 +0000 UTC m=+35.498853287 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.193383 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.193435 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.193451 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:16.193433721 +0000 UTC m=+35.499039912 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.193549 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-01 15:42:16.193529763 +0000 UTC m=+35.499135954 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.194300 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11
\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.209490 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.223178 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.240680 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.250853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.250908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.250921 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.250942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.250954 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.258473 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.277896 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.294021 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.294071 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 
15:42:08.294216 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.294235 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.294247 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.294312 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:16.294294279 +0000 UTC m=+35.599900470 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.294524 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.294588 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.294604 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.294690 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:16.294664559 +0000 UTC m=+35.600270940 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.301737 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPa
th\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb86
1678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.316434 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.354755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.354798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.354810 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.354828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.354839 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.457604 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.457854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.457862 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.457876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.457886 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.560073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.560117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.560150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.560167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.560180 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.600856 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.600986 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.601009 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.601116 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.601264 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:08 crc kubenswrapper[4949]: E1001 15:42:08.601388 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.663066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.663111 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.663147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.663166 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.663193 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.765786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.765819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.765831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.765844 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.765853 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.778507 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.778643 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.778714 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.778769 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.780737 4949 generic.go:334] "Generic (PLEG): container finished" podID="6656de7a-9b8f-4714-81ae-3685c01f11fb" containerID="bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f" exitCode=0 Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.780814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerDied","Data":"bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.800239 4949 status_manager.go:875] "Failed 
to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.819183 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.842333 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.856079 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.870226 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.870260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.870270 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.870284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.870294 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.870263 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf
8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.885071 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.898705 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.911473 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.924405 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.936431 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.950815 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodIn
itializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-
release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.972323 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.973993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.974023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.974032 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.974072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.974083 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:08Z","lastTransitionTime":"2025-10-01T15:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:08 crc kubenswrapper[4949]: I1001 15:42:08.987180 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.014225 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:09Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.029041 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:09Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.076908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.076951 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.076961 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.076978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.076988 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:09Z","lastTransitionTime":"2025-10-01T15:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.179243 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.179309 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.179322 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.179347 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.179360 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:09Z","lastTransitionTime":"2025-10-01T15:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.283001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.283076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.283098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.283163 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.283191 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:09Z","lastTransitionTime":"2025-10-01T15:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.386016 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.386081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.386096 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.386140 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.386160 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:09Z","lastTransitionTime":"2025-10-01T15:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.489031 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.489085 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.489098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.489140 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.489163 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:09Z","lastTransitionTime":"2025-10-01T15:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.592006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.592045 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.592056 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.592072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.592083 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:09Z","lastTransitionTime":"2025-10-01T15:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.695473 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.695527 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.695537 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.695564 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.695582 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:09Z","lastTransitionTime":"2025-10-01T15:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.788686 4949 generic.go:334] "Generic (PLEG): container finished" podID="6656de7a-9b8f-4714-81ae-3685c01f11fb" containerID="ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204" exitCode=0 Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.788750 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerDied","Data":"ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.795390 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.796305 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.803853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.804075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.804336 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.804487 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 
15:42:09.805293 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:09Z","lastTransitionTime":"2025-10-01T15:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.819984 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ku
bernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:09Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.844407 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3551
2335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:09Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.868496 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\
\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\"
,\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:09Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.897173 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mou
ntPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses
\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:
07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:09Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.908028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.908051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.908061 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.908074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:09 
crc kubenswrapper[4949]: I1001 15:42:09.908084 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:09Z","lastTransitionTime":"2025-10-01T15:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.959612 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:09Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.978460 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:09Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:09 crc kubenswrapper[4949]: I1001 15:42:09.992322 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:09Z is after 2025-08-24T17:21:41Z" Oct 01 
15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.002875 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.011488 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8
b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.012476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.012503 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.012515 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.012530 4949 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.012542 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.025227 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.038227 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.050035 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.065786 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.080872 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.092707 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.115348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.115403 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.115415 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc 
kubenswrapper[4949]: I1001 15:42:10.115438 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.115453 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.218319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.218385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.218402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.218421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.218434 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.320750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.320819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.320840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.320871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.320892 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.422849 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.422904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.422920 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.422943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.422960 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.526945 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.527010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.527046 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.527083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.527105 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.600790 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.600859 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.600795 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:10 crc kubenswrapper[4949]: E1001 15:42:10.600955 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:10 crc kubenswrapper[4949]: E1001 15:42:10.601079 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:10 crc kubenswrapper[4949]: E1001 15:42:10.601253 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.631039 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.631107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.631161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.631187 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.631204 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.733820 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.733884 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.733904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.733932 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.733951 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.803245 4949 generic.go:334] "Generic (PLEG): container finished" podID="6656de7a-9b8f-4714-81ae-3685c01f11fb" containerID="7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c" exitCode=0 Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.803309 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerDied","Data":"7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.824544 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCoun
t\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.837508 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.837572 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.837595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.837621 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.837640 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.837734 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\"
,\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.858213 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.884302 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"202
5-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"i
mage\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-r
esources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.901304 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.924394 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.939526 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.941170 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.941210 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.941221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.941240 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.941253 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:10Z","lastTransitionTime":"2025-10-01T15:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.953443 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.966367 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.982655 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:10 crc kubenswrapper[4949]: I1001 15:42:10.997171 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:10Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.010509 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.027533 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.041554 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.045795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.045822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.045833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.045848 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.045858 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.054908 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.149616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.149657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.149668 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.149686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.149701 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.253350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.253402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.253414 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.253432 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.253445 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.356266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.356299 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.356308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.356320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.356329 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.460623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.460690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.460702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.460719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.460731 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.563171 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.563215 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.563227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.563247 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.563259 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.617593 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.630710 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.647614 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9e
d706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-1
0-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/servi
ceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.664860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.664913 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.664921 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.664938 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.664948 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.678693 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.692402 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.707495 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.718037 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.737209 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.755932 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.
11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.767649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.767687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.767697 4949 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.767713 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.767724 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.769296 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.781984 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.800274 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.809698 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" 
event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerStarted","Data":"2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.813883 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.818431 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.831346 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.845429 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.871029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.871068 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.871083 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.871105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.871145 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.872684 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.884737 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.898150 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc
84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.916485 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.941456 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:
43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.961496 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.973538 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.973578 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.973588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.973603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.973615 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:11Z","lastTransitionTime":"2025-10-01T15:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.973797 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.982824 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:11 crc kubenswrapper[4949]: I1001 15:42:11.989828 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.008990 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, 
/tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"
startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.025203 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.062226 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.076599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.076641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.076654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:12 crc 
kubenswrapper[4949]: I1001 15:42:12.076675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.076688 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:12Z","lastTransitionTime":"2025-10-01T15:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.079147 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.095680 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.109348 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.190236 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.190333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.190348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.190370 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.190385 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:12Z","lastTransitionTime":"2025-10-01T15:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.294403 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.294464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.294484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.294511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.294532 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:12Z","lastTransitionTime":"2025-10-01T15:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.397611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.397659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.397672 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.397688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.397701 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:12Z","lastTransitionTime":"2025-10-01T15:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.500515 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.500591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.500609 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.500641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.500661 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:12Z","lastTransitionTime":"2025-10-01T15:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.534820 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.553537 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.564948 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.582804 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.593859 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.600520 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:12 crc kubenswrapper[4949]: E1001 15:42:12.600624 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.600690 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:12 crc kubenswrapper[4949]: E1001 15:42:12.600739 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.600787 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:12 crc kubenswrapper[4949]: E1001 15:42:12.600838 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.602526 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.602543 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.602554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.602566 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.602575 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:12Z","lastTransitionTime":"2025-10-01T15:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.617691 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z 
is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.645031 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\
\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\
\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10
-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.660879 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.677366 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.693329 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.705350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.705386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.705402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.705421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.705435 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:12Z","lastTransitionTime":"2025-10-01T15:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.707583 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdn
zf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.727420 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":t
rue,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.743283 4949 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.1
68.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.768462 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.783049 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.794081 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:12Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.808361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.808419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.808437 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.808460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.808473 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:12Z","lastTransitionTime":"2025-10-01T15:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.911861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.911945 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.911972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.912002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:12 crc kubenswrapper[4949]: I1001 15:42:12.912023 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:12Z","lastTransitionTime":"2025-10-01T15:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.015392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.015513 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.015551 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.015598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.015628 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.118399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.118452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.118464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.118487 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.118504 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.220990 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.221027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.221037 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.221051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.221060 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.324525 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.324884 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.324899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.324917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.324930 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.429649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.429681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.429691 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.429705 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.429716 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.532723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.532749 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.532762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.532776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.532784 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.636233 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.636301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.636325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.636355 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.636378 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.784943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.785033 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.785050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.785065 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.785076 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.826186 4949 generic.go:334] "Generic (PLEG): container finished" podID="6656de7a-9b8f-4714-81ae-3685c01f11fb" containerID="2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90" exitCode=0 Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.826283 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerDied","Data":"2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.852837 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev
/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based 
request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:13Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.868839 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:13Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.886511 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:13Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.888207 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.888252 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.888270 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.888296 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.888316 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.904322 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:13Z 
is after 2025-08-24T17:21:41Z" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.924797 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:13Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.943326 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:13Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.959453 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:13Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.981032 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:13Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.991578 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.991660 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.991700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.991736 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.991760 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:13Z","lastTransitionTime":"2025-10-01T15:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:13 crc kubenswrapper[4949]: I1001 15:42:13.999171 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:13Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.025410 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:14Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.054339 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v
4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\
"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{
\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:14Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.074734 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:14Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.092667 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:14Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.095922 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.095970 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.095987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.096011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.096026 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:14Z","lastTransitionTime":"2025-10-01T15:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.109327 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:14Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.134507 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:14Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.198515 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.198563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.198578 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.198600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.198618 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:14Z","lastTransitionTime":"2025-10-01T15:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.306868 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.306941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.306960 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.306983 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.307000 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:14Z","lastTransitionTime":"2025-10-01T15:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.409384 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.409443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.409461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.409482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.409497 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:14Z","lastTransitionTime":"2025-10-01T15:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.512847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.512918 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.512936 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.512961 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.512981 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:14Z","lastTransitionTime":"2025-10-01T15:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.600607 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.600756 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:14 crc kubenswrapper[4949]: E1001 15:42:14.600868 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.600958 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:14 crc kubenswrapper[4949]: E1001 15:42:14.601060 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:14 crc kubenswrapper[4949]: E1001 15:42:14.601276 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.616270 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.616325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.616343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.616369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.616387 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:14Z","lastTransitionTime":"2025-10-01T15:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.719461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.719536 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.719562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.719592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.719616 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:14Z","lastTransitionTime":"2025-10-01T15:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.824543 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.824577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.824585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.824600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.824608 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:14Z","lastTransitionTime":"2025-10-01T15:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.837804 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa"} Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.927577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.927630 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.927645 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.927666 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:14 crc kubenswrapper[4949]: I1001 15:42:14.927681 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:14Z","lastTransitionTime":"2025-10-01T15:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.030914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.030971 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.030993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.031028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.031051 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.135335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.135637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.135650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.135667 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.135679 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.237458 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.237482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.237490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.237502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.237510 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.343492 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.343515 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.343524 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.343535 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.343545 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.446850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.446874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.446881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.446893 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.446901 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.550009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.550053 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.550064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.550081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.550093 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.652297 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.652326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.652335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.652353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.652363 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.755256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.755529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.755546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.755568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.755582 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.846932 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerDied","Data":"0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.846840 4949 generic.go:334] "Generic (PLEG): container finished" podID="6656de7a-9b8f-4714-81ae-3685c01f11fb" containerID="0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0" exitCode=0 Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.847872 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.848174 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.858899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.858941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.858954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.858974 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.858989 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.873391 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\"
,\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:15Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.892952 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volum
eMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:15Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.895728 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.895801 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.913513 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:15Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.940042 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:15Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.953620 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:
43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:15Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.961675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.961732 
4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.961743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.961759 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.961794 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:15Z","lastTransitionTime":"2025-10-01T15:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.967275 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:15Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.976368 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:15Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:15 crc kubenswrapper[4949]: I1001 15:42:15.996748 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:15Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.012101 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335
e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"
running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating 
authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"
terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.047514 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.058157 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.063737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.063799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.063808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.063821 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.063829 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.070072 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z 
is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.087265 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.109717 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.122238 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.133441 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.146478 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.157714 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.166361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.166423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.166439 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.166457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.166471 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.170780 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf
8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"s
tartTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.185281 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"1
92.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.203601 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\
\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-di
r\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e
823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a673
14731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.206346 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.206540 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.206578 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:42:32.206549188 +0000 UTC m=+51.512155389 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.206633 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.206656 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.206768 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:32.206748494 +0000 UTC m=+51.512354685 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.206798 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.206890 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:32.206874417 +0000 UTC m=+51.512480618 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.223674 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.236278 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.247962 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.264196 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.268549 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.268574 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.268582 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.268595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.268605 4949 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.277972 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrid
es\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.289565 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.307406 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.307461 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.307590 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.307619 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.307628 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.307633 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.307646 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.307661 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.307697 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:32.307674084 +0000 UTC m=+51.613280475 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.307726 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-01 15:42:32.307716095 +0000 UTC m=+51.613322286 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.308214 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.326517 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe6
4443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.336206 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.371635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.371669 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.371678 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.371692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.371702 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.473934 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.473981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.473994 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.474012 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.474025 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.577148 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.577806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.577875 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.577954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.578028 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.601602 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.601683 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.601753 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.601621 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.601813 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:16 crc kubenswrapper[4949]: E1001 15:42:16.601851 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.681001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.681208 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.681229 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.681255 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.681276 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.784843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.784925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.784945 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.784972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.784991 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.858197 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" event={"ID":"6656de7a-9b8f-4714-81ae-3685c01f11fb","Type":"ContainerStarted","Data":"24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.858634 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.888348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.888411 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.888429 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.888453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.888473 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.899358 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.915257 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.934076 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.948227 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.964295 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.981341 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.992577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.992616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.992634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.992653 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.992663 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:16Z","lastTransitionTime":"2025-10-01T15:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:16 crc kubenswrapper[4949]: I1001 15:42:16.996328 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:16Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.020037 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.041371 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"res
ource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe6
4443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.050962 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.062456 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.070450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.070482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.070494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.070508 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.070517 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.073877 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.085283 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: E1001 15:42:17.085434 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.089044 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.089078 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.089089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.089105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.089118 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: E1001 15:42:17.102922 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.103516 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" 
Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.107786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.108011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.108074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.108154 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.108247 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.117380 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z 
is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: E1001 15:42:17.126695 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.130686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.130784 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.130848 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.130929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.131004 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: E1001 15:42:17.144253 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.154408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.154609 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.154663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.154720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.154771 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: E1001 15:42:17.167828 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:17Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:17 crc kubenswrapper[4949]: E1001 15:42:17.167983 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.169607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.169648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.169662 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.169682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.169698 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.272350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.272420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.272439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.272465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.272484 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.375670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.375727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.375739 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.375760 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.375775 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.478012 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.478070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.478091 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.478115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.478166 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.582335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.583325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.583479 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.583741 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.583872 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.688050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.688568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.688941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.689324 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.689660 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.793485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.793573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.793598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.793630 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.793653 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.865431 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.897474 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.897519 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.897531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.897549 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:17 crc kubenswrapper[4949]: I1001 15:42:17.897565 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:17Z","lastTransitionTime":"2025-10-01T15:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.000896 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.000973 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.001002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.001050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.001077 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.103398 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.103463 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.103516 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.103536 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.103548 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.205874 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.205911 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.205920 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.205937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.205948 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.308244 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.308291 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.308302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.308317 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.308327 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.356459 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z"] Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.356948 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.359076 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.360168 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.374205 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445
c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.397845 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.411385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.411443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.411456 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.411478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.411492 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.431808 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.444365 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/660f5a12-b71d-454a-8ec0-bae2646530a0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.444454 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/660f5a12-b71d-454a-8ec0-bae2646530a0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.444562 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbc4b\" (UniqueName: \"kubernetes.io/projected/660f5a12-b71d-454a-8ec0-bae2646530a0-kube-api-access-cbc4b\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.444673 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/660f5a12-b71d-454a-8ec0-bae2646530a0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: 
\"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.448148 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri
-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.467171 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.483594 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe6
4443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.497881 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.514457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.514502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.514529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc 
kubenswrapper[4949]: I1001 15:42:18.514549 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.514559 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.518455 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.529408 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.541808 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.545202 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/660f5a12-b71d-454a-8ec0-bae2646530a0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.545258 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/660f5a12-b71d-454a-8ec0-bae2646530a0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.545277 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbc4b\" (UniqueName: \"kubernetes.io/projected/660f5a12-b71d-454a-8ec0-bae2646530a0-kube-api-access-cbc4b\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.545306 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/660f5a12-b71d-454a-8ec0-bae2646530a0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.545830 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/660f5a12-b71d-454a-8ec0-bae2646530a0-env-overrides\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: 
I1001 15:42:18.546224 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/660f5a12-b71d-454a-8ec0-bae2646530a0-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.555071 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/660f5a12-b71d-454a-8ec0-bae2646530a0-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.561355 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.563809 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbc4b\" (UniqueName: \"kubernetes.io/projected/660f5a12-b71d-454a-8ec0-bae2646530a0-kube-api-access-cbc4b\") pod \"ovnkube-control-plane-749d76644c-zxj4z\" (UID: \"660f5a12-b71d-454a-8ec0-bae2646530a0\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.575181 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c
4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\
\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.589573 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.600886 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.600967 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:18 crc kubenswrapper[4949]: E1001 15:42:18.601015 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:18 crc kubenswrapper[4949]: E1001 15:42:18.601279 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.601452 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:18 crc kubenswrapper[4949]: E1001 15:42:18.601524 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.601949 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"starte
dAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.616783 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.616994 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.617055 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.617116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.617393 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.618105 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\
\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\
\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnl
y\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\
\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.641155 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a6731
4731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\
\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\
":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"image
ID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.670235 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.720629 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.720675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.720687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.720706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.720719 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.823246 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.823291 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.823302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.823320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.823334 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.870154 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" event={"ID":"660f5a12-b71d-454a-8ec0-bae2646530a0","Type":"ContainerStarted","Data":"fb3ba5bee630acc9661e1ffa326982afe0e11a17e20f1be957d72a1b6e922a44"} Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.925937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.925960 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.925969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.925984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:18 crc kubenswrapper[4949]: I1001 15:42:18.925993 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:18Z","lastTransitionTime":"2025-10-01T15:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.028710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.029378 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.029406 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.029483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.029572 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.074442 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-kfx8b"] Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.074883 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.074946 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.093772 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.111714 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.130347 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.133371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc 
kubenswrapper[4949]: I1001 15:42:19.133421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.133434 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.133452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.133464 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.146403 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.154532 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.154578 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f2bpg\" (UniqueName: \"kubernetes.io/projected/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-kube-api-access-f2bpg\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.165243 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.179923 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.197795 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.210265 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.227709 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.236045 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.236086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.236098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.236114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.236143 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.248713 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"re
ason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.255181 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.255225 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f2bpg\" (UniqueName: \"kubernetes.io/projected/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-kube-api-access-f2bpg\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.255469 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.255584 4949 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs podName:d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd nodeName:}" failed. No retries permitted until 2025-10-01 15:42:19.755554239 +0000 UTC m=+39.061160470 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs") pod "network-metrics-daemon-kfx8b" (UID: "d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.260970 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMoun
ts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.271241 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f2bpg\" (UniqueName: \"kubernetes.io/projected/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-kube-api-access-f2bpg\") 
pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.277174 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.287798 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.301432 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.310863 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.320108 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a695
20ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.333502 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\"
,\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.338221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.338254 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.338266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.338283 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.338297 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.363084 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.363293 4949 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.363790 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa is running failed: container process not found" containerID="16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.364203 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa is running failed: container process not found" containerID="16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.364535 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa is running failed: container process not found" containerID="16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.364610 4949 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is 
not created or running: checking if PID of 16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.365239 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa is running failed: container process not found" containerID="16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.365644 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa is running failed: container process not found" containerID="16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.366045 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa is running failed: container process not found" containerID="16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa" cmd=["/bin/bash","-c","#!/bin/bash\ntest -f /etc/cni/net.d/10-ovn-kubernetes.conf\n"] Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.366104 4949 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 
16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa is running failed: container process not found" probeType="Readiness" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.441658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.441721 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.441734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.441752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.441774 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.545216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.545295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.545321 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.545350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.545375 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.648042 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.648088 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.648097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.648110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.648184 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.750665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.750696 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.750704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.750721 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.750730 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.761205 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.761347 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:19 crc kubenswrapper[4949]: E1001 15:42:19.761396 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs podName:d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd nodeName:}" failed. No retries permitted until 2025-10-01 15:42:20.761384503 +0000 UTC m=+40.066990694 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs") pod "network-metrics-daemon-kfx8b" (UID: "d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.853282 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.853334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.853350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.853367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.853379 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.875435 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/0.log" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.878195 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa" exitCode=1 Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.878296 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.879088 4949 scope.go:117] "RemoveContainer" containerID="16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.880399 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" event={"ID":"660f5a12-b71d-454a-8ec0-bae2646530a0","Type":"ContainerStarted","Data":"3a4e431fdeada96dc88cfe8f1b26c2ba3502f52e3d28880a87c5c4ed7482defb"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.880420 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" event={"ID":"660f5a12-b71d-454a-8ec0-bae2646530a0","Type":"ContainerStarted","Data":"956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.910771 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.925185 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.955592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.955646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.955658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.955678 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.955691 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:19Z","lastTransitionTime":"2025-10-01T15:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.963512 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\" handler 4 for removal\\\\nI1001 15:42:19.239282 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:19.239288 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 
15:42:19.239304 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1001 15:42:19.239334 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:19.239339 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:19.239349 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1001 15:42:19.239368 6207 factory.go:656] Stopping watch factory\\\\nI1001 15:42:19.239387 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1001 15:42:19.239400 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:19.239406 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:19.239424 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:19.239431 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1001 15:42:19.239438 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15:42:19.239444 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1001 15:42:19.239506 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33
c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:19 crc kubenswrapper[4949]: I1001 15:42:19.983517 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hos
tIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.001494 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825
771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/et
c/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe6
4443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:19Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.016738 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.033402 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.047460 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.058528 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.058577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.058599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:20 crc 
kubenswrapper[4949]: I1001 15:42:20.058618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.058632 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:20Z","lastTransitionTime":"2025-10-01T15:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.063917 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.079580 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.098562 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.110717 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc 
kubenswrapper[4949]: I1001 15:42:20.131554 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.146203 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.161693 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.161917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.161950 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.161963 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.161983 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.161996 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:20Z","lastTransitionTime":"2025-10-01T15:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.175840 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.193346 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.208005 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.221282 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.234568 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.248140 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.260943 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.264503 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:20 crc 
kubenswrapper[4949]: I1001 15:42:20.264562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.264577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.264598 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.264610 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:20Z","lastTransitionTime":"2025-10-01T15:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.271290 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.289498 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\
"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87
d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b9
0092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"t
erminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.305272 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.326172 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.340828 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.357564 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.367379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.367414 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.367425 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.367442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.367455 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:20Z","lastTransitionTime":"2025-10-01T15:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.387923 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c813
46c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.400903 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.425677 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\" handler 4 for removal\\\\nI1001 15:42:19.239282 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:19.239288 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 
15:42:19.239304 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1001 15:42:19.239334 6207 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:19.239339 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:19.239349 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1001 15:42:19.239368 6207 factory.go:656] Stopping watch factory\\\\nI1001 15:42:19.239387 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1001 15:42:19.239400 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:19.239406 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:19.239424 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:19.239431 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1001 15:42:19.239438 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15:42:19.239444 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1001 15:42:19.239506 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:14Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceacco
unt\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33
c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.445582 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.469330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.469562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.469656 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.469741 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.469819 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:20Z","lastTransitionTime":"2025-10-01T15:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.469853 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.483742 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.572625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.572665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.572676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.572695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.572709 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:20Z","lastTransitionTime":"2025-10-01T15:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.601025 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.601080 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.601111 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.601321 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:20 crc kubenswrapper[4949]: E1001 15:42:20.601882 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:20 crc kubenswrapper[4949]: E1001 15:42:20.601621 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:20 crc kubenswrapper[4949]: E1001 15:42:20.601718 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:20 crc kubenswrapper[4949]: E1001 15:42:20.601504 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.676202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.676997 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.677060 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.677136 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.677195 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:20Z","lastTransitionTime":"2025-10-01T15:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.770927 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:20 crc kubenswrapper[4949]: E1001 15:42:20.771103 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:20 crc kubenswrapper[4949]: E1001 15:42:20.771199 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs podName:d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd nodeName:}" failed. No retries permitted until 2025-10-01 15:42:22.77117641 +0000 UTC m=+42.076782631 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs") pod "network-metrics-daemon-kfx8b" (UID: "d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.780385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.780446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.780463 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.780488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.780507 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:20Z","lastTransitionTime":"2025-10-01T15:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.887063 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/0.log" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.901414 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.901472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.901486 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.901510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.901525 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:20Z","lastTransitionTime":"2025-10-01T15:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.902621 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064"} Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.903635 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.924794 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-1
0-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.936291 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.965701 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\" handler 4 for removal\\\\nI1001 15:42:19.239282 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:19.239288 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 15:42:19.239304 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1001 15:42:19.239334 6207 
handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:19.239339 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:19.239349 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1001 15:42:19.239368 6207 factory.go:656] Stopping watch factory\\\\nI1001 15:42:19.239387 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1001 15:42:19.239400 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:19.239406 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:19.239424 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:19.239431 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1001 15:42:19.239438 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15:42:19.239444 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1001 15:42:19.239506 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.977698 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:20 crc kubenswrapper[4949]: I1001 15:42:20.989273 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\"
,\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.004231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.004274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.004286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.004304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.004315 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.017111 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:20Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.029943 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.041867 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.055713 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.068362 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.080539 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.092367 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc 
kubenswrapper[4949]: I1001 15:42:21.107740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.107801 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.107821 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.107842 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.107858 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.118672 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.136104 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.150031 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.177497 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.193551 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.211234 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.211279 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.211291 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.211310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.211323 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.314090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.314179 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.314198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.314217 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.314360 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.418739 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.419025 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.419218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.419356 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.419479 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.521968 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.522278 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.522372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.522502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.522593 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.624445 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.626073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.626204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.626231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.626261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.626281 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.650446 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.669431 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.693312 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.720397 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.729162 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.729393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.729547 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc 
kubenswrapper[4949]: I1001 15:42:21.729672 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.729800 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.739748 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.756067 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc 
kubenswrapper[4949]: I1001 15:42:21.778502 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.798961 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.818654 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.832772 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.832832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.832845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.832872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.832890 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.838540 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.856922 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.884471 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.898006 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.909595 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/1.log" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.910240 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/0.log" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.912552 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064" 
exitCode=1 Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.912656 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.912746 4949 scope.go:117] "RemoveContainer" containerID="16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.913387 4949 scope.go:117] "RemoveContainer" containerID="3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064" Oct 01 15:42:21 crc kubenswrapper[4949]: E1001 15:42:21.913545 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.924885 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\" handler 4 for removal\\\\nI1001 15:42:19.239282 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:19.239288 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 15:42:19.239304 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1001 15:42:19.239334 6207 
handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:19.239339 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:19.239349 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1001 15:42:19.239368 6207 factory.go:656] Stopping watch factory\\\\nI1001 15:42:19.239387 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1001 15:42:19.239400 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:19.239406 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:19.239424 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:19.239431 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1001 15:42:19.239438 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15:42:19.239444 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1001 15:42:19.239506 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from 
k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnku
be-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
ecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.935322 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.935387 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.935401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.935428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.935442 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:21Z","lastTransitionTime":"2025-10-01T15:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.943100 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.961441 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.974813 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:21 crc kubenswrapper[4949]: I1001 15:42:21.993627 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe6
4443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.007001 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.019865 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.034813 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.037648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.037683 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.037695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc 
kubenswrapper[4949]: I1001 15:42:22.037711 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.037723 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.053290 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.067782 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc 
kubenswrapper[4949]: I1001 15:42:22.084357 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.104024 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.128947 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.140069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.140114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.140161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.140182 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.140196 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.142471 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.157288 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.178279 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.189428 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.219058 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16dc5f79df776c5ddb232fece8906b6ab97557ab06bcd8d74033c6c0a6ba9cfa\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\" handler 4 for removal\\\\nI1001 15:42:19.239282 6207 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:19.239288 6207 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 15:42:19.239304 6207 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI1001 15:42:19.239334 6207 
handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:19.239339 6207 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:19.239349 6207 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI1001 15:42:19.239368 6207 factory.go:656] Stopping watch factory\\\\nI1001 15:42:19.239387 6207 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI1001 15:42:19.239400 6207 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:19.239406 6207 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:19.239424 6207 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:19.239431 6207 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI1001 15:42:19.239438 6207 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15:42:19.239444 6207 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI1001 15:42:19.239506 6207 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:14Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:21Z\\\",\\\"message\\\":\\\":160\\\\nI1001 15:42:20.949761 6420 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1001 15:42:20.949806 6420 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1001 15:42:20.950328 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1001 
15:42:20.950359 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 15:42:20.950367 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:20.950404 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:20.950410 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:20.950449 6420 factory.go:656] Stopping watch factory\\\\nI1001 15:42:20.950459 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:20.950466 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1001 15:42:20.950472 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:20.950475 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:20.950483 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mount
Path\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-ac
cess-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.232323 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadO
nly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.244595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.244636 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 
15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.244644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.244658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.244668 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.252287 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.347244 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.347294 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.347311 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.347334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.347351 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.450209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.450250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.450262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.450278 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.450289 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.552984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.553034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.553048 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.553066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.553079 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.601066 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.601153 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.601153 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:22 crc kubenswrapper[4949]: E1001 15:42:22.601234 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.601282 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:22 crc kubenswrapper[4949]: E1001 15:42:22.601516 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:22 crc kubenswrapper[4949]: E1001 15:42:22.601557 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:22 crc kubenswrapper[4949]: E1001 15:42:22.601621 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.657048 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.657157 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.657177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.657203 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.657221 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.760683 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.760755 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.760772 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.760797 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.760813 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.793025 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:22 crc kubenswrapper[4949]: E1001 15:42:22.793332 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:22 crc kubenswrapper[4949]: E1001 15:42:22.793457 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs podName:d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd nodeName:}" failed. No retries permitted until 2025-10-01 15:42:26.793434437 +0000 UTC m=+46.099040638 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs") pod "network-metrics-daemon-kfx8b" (UID: "d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.862972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.863029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.863048 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.863071 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.863088 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.920278 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/1.log" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.926845 4949 scope.go:117] "RemoveContainer" containerID="3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064" Oct 01 15:42:22 crc kubenswrapper[4949]: E1001 15:42:22.927197 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.942683 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.955965 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.966030 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.966066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.966074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.966087 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.966098 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:22Z","lastTransitionTime":"2025-10-01T15:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.982328 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:22 crc kubenswrapper[4949]: I1001 15:42:22.999317 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-c
ni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c85
7df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\
"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp
-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/c
ni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:22Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.023060 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.035052 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.065179 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:21Z\\\",\\\"message\\\":\\\":160\\\\nI1001 15:42:20.949761 6420 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1001 15:42:20.949806 6420 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1001 15:42:20.950328 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1001 15:42:20.950359 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 15:42:20.950367 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:20.950404 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:20.950410 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:20.950449 6420 factory.go:656] Stopping watch factory\\\\nI1001 15:42:20.950459 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:20.950466 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1001 15:42:20.950472 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:20.950475 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:20.950483 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.068908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.068949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.068963 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.068979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.068990 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:23Z","lastTransitionTime":"2025-10-01T15:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.084284 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.098723 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.112621 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.129441 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791f
d90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\"
:\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe6
4443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.142388 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.155904 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.170590 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.172218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.172269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.172288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:23 crc 
kubenswrapper[4949]: I1001 15:42:23.172310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.172326 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:23Z","lastTransitionTime":"2025-10-01T15:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.187639 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.199788 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc 
kubenswrapper[4949]: I1001 15:42:23.216461 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:23Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.275080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.275156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.275175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.275199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.275218 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:23Z","lastTransitionTime":"2025-10-01T15:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.379003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.379413 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.379582 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.379727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.379938 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:23Z","lastTransitionTime":"2025-10-01T15:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.483838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.483910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.483929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.483953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.483972 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:23Z","lastTransitionTime":"2025-10-01T15:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.586840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.586929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.586953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.586980 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.587001 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:23Z","lastTransitionTime":"2025-10-01T15:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.690507 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.690569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.690588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.690610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.690628 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:23Z","lastTransitionTime":"2025-10-01T15:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.794354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.794465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.794487 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.794515 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.794533 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:23Z","lastTransitionTime":"2025-10-01T15:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.897937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.897998 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.898011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.898060 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:23 crc kubenswrapper[4949]: I1001 15:42:23.898079 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:23Z","lastTransitionTime":"2025-10-01T15:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.001099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.001159 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.001172 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.001190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.001203 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.104245 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.104304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.104322 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.104344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.104360 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.208016 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.208075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.208110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.208172 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.208191 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.310961 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.311017 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.311034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.311056 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.311072 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.413837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.413881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.413892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.413911 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.413922 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.517213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.517265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.517277 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.517296 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.517309 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.600925 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.600969 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.601038 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:24 crc kubenswrapper[4949]: E1001 15:42:24.601242 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:24 crc kubenswrapper[4949]: E1001 15:42:24.601512 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.601530 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:24 crc kubenswrapper[4949]: E1001 15:42:24.601680 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:24 crc kubenswrapper[4949]: E1001 15:42:24.601809 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.620015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.620067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.620084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.620149 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.620167 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.723324 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.723365 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.723377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.723393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.723405 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.826421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.826509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.826522 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.826539 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.826549 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.929648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.929684 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.929693 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.929707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:24 crc kubenswrapper[4949]: I1001 15:42:24.929715 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:24Z","lastTransitionTime":"2025-10-01T15:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.032403 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.032443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.032453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.032474 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.032483 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.136003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.136087 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.136106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.136173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.136204 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.239196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.239261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.239283 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.239307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.239324 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.341682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.341741 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.341750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.341764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.341774 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.444213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.444258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.444268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.444283 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.444294 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.546957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.547004 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.547018 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.547035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.547048 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.649786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.649823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.649831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.649844 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.649854 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.752669 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.752711 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.752723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.752742 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.752755 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.854374 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.854413 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.854428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.854446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.854457 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.956730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.956768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.956787 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.956805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:25 crc kubenswrapper[4949]: I1001 15:42:25.956816 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:25Z","lastTransitionTime":"2025-10-01T15:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.059381 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.059441 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.059458 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.059481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.059498 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.162197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.162236 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.162247 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.162265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.162277 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.265220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.265282 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.265301 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.265326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.265346 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.368796 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.368843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.368854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.368871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.368886 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.471495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.471555 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.471573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.471602 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.471618 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.574698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.574733 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.574742 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.574756 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.574767 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.601578 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.601652 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.601622 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.601589 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:26 crc kubenswrapper[4949]: E1001 15:42:26.601805 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:26 crc kubenswrapper[4949]: E1001 15:42:26.601961 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:26 crc kubenswrapper[4949]: E1001 15:42:26.602212 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:26 crc kubenswrapper[4949]: E1001 15:42:26.602376 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.677921 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.678002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.678030 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.678061 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.678085 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.781488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.781566 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.781592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.781622 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.781646 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.844698 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:26 crc kubenswrapper[4949]: E1001 15:42:26.844943 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:26 crc kubenswrapper[4949]: E1001 15:42:26.845079 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs podName:d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd nodeName:}" failed. No retries permitted until 2025-10-01 15:42:34.845045526 +0000 UTC m=+54.150651757 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs") pod "network-metrics-daemon-kfx8b" (UID: "d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.884769 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.884838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.884863 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.884892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.884914 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.988011 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.988063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.988085 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.988117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:26 crc kubenswrapper[4949]: I1001 15:42:26.988186 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:26Z","lastTransitionTime":"2025-10-01T15:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.090664 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.090714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.090728 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.090750 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.090762 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.194112 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.194209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.194229 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.194254 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.194269 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.204867 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.204918 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.204931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.204952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.204965 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: E1001 15:42:27.218940 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:27Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.222986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.223040 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.223058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.223081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.223101 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: E1001 15:42:27.240905 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:27Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.245439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.245485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.245497 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.245516 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.245530 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: E1001 15:42:27.262219 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:27Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.265934 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.265964 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.265975 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.265990 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.266004 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: E1001 15:42:27.280565 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{…}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:27Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.284333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.284366 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.284378 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.284394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.284407 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: E1001 15:42:27.299208 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:27Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:27 crc kubenswrapper[4949]: E1001 15:42:27.299355 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.300818 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.300854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.301434 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.301461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.301473 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.408986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.409113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.409169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.409224 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.409259 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.512408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.512477 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.512499 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.512528 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.512549 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.614576 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.614633 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.614651 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.614672 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.614688 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.718288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.718363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.718383 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.718408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.718425 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.821297 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.821379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.821403 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.821433 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.821458 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.924272 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.924367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.924408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.924440 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:27 crc kubenswrapper[4949]: I1001 15:42:27.924465 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:27Z","lastTransitionTime":"2025-10-01T15:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.028006 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.028085 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.028108 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.028287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.028322 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.131437 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.131489 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.131558 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.131615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.131638 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.235616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.235677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.235692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.235714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.235733 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.338621 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.338697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.338730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.338760 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.338780 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.441561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.441604 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.441616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.441640 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.441663 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.544789 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.544847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.544862 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.544881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.544894 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.600620 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.600676 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.600678 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.600748 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:28 crc kubenswrapper[4949]: E1001 15:42:28.600816 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:28 crc kubenswrapper[4949]: E1001 15:42:28.600968 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:28 crc kubenswrapper[4949]: E1001 15:42:28.601178 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:28 crc kubenswrapper[4949]: E1001 15:42:28.601308 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.647110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.647190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.647205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.647225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.647237 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.749749 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.749806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.749822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.749845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.749863 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.853826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.853872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.853886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.853903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.853913 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.956451 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.956485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.956494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.956506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:28 crc kubenswrapper[4949]: I1001 15:42:28.956514 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:28Z","lastTransitionTime":"2025-10-01T15:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.059954 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.060053 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.060090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.060120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.060179 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.163460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.163532 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.163555 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.163589 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.163612 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.267037 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.267102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.267165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.267202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.267228 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.370214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.370275 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.370294 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.370320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.370339 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.473068 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.473173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.473192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.473217 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.473236 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.575418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.575457 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.575469 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.575484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.575493 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.678214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.678261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.678276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.678293 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.678305 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.780529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.780562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.780573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.780589 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.780600 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.882623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.882671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.882686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.882706 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.882721 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.986149 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.986190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.986203 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.986222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:29 crc kubenswrapper[4949]: I1001 15:42:29.986233 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:29Z","lastTransitionTime":"2025-10-01T15:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.089819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.089900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.089923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.089968 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.090001 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:30Z","lastTransitionTime":"2025-10-01T15:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.194191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.194277 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.194310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.194342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.194365 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:30Z","lastTransitionTime":"2025-10-01T15:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.298826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.298909 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.298930 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.298959 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.298980 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:30Z","lastTransitionTime":"2025-10-01T15:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.402648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.402701 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.402714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.402736 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.402747 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:30Z","lastTransitionTime":"2025-10-01T15:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.505048 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.505081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.505089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.505133 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.505192 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:30Z","lastTransitionTime":"2025-10-01T15:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.601155 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.601253 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:30 crc kubenswrapper[4949]: E1001 15:42:30.601300 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.601333 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:30 crc kubenswrapper[4949]: E1001 15:42:30.601432 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.601478 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:30 crc kubenswrapper[4949]: E1001 15:42:30.601527 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:30 crc kubenswrapper[4949]: E1001 15:42:30.601571 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.607341 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.607367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.607376 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.607394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.607411 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:30Z","lastTransitionTime":"2025-10-01T15:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.711086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.711222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.711252 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.711282 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.711304 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:30Z","lastTransitionTime":"2025-10-01T15:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.814536 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.814625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.814648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.814711 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.814742 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:30Z","lastTransitionTime":"2025-10-01T15:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.918230 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.918311 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.918360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.918392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:30 crc kubenswrapper[4949]: I1001 15:42:30.918410 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:30Z","lastTransitionTime":"2025-10-01T15:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.021023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.021060 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.021069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.021082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.021092 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.124531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.124619 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.124646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.124677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.124697 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.227532 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.227575 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.227587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.227617 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.227642 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.330467 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.330529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.330597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.330635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.330658 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.433781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.433819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.433827 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.433839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.433849 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.537522 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.537574 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.537584 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.537600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.537613 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.619412 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc 
kubenswrapper[4949]: I1001 15:42:31.635336 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.639956 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.639988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.639999 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.640017 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.640029 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.651227 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.663897 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.686488 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.701334 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.715709 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580
436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:
42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.734943 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kub
ernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://
b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c68774
41ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":f
alse,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.742186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.742230 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.742244 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.742264 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.742277 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.749440 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.762105 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.775051 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.786518 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\
\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.796806 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.818794 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:21Z\\\",\\\"message\\\":\\\":160\\\\nI1001 15:42:20.949761 6420 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1001 15:42:20.949806 6420 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1001 15:42:20.950328 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1001 15:42:20.950359 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 15:42:20.950367 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:20.950404 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:20.950410 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:20.950449 6420 factory.go:656] Stopping watch factory\\\\nI1001 15:42:20.950459 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:20.950466 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1001 15:42:20.950472 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:20.950475 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:20.950483 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.833665 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.844589 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.844638 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.844647 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.844661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.844672 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.860618 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.872882 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.947773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.947810 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.947819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.947833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:31 crc kubenswrapper[4949]: I1001 15:42:31.947841 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:31Z","lastTransitionTime":"2025-10-01T15:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.050741 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.050786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.050803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.050818 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.050829 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.153221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.153288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.153307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.153332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.153350 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.257252 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.257340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.257399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.257422 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.257438 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.304268 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.304404 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.304481 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.304638 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:43:04.304598272 +0000 UTC m=+83.610204513 (durationBeforeRetry 32s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.304669 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.304784 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:43:04.304765687 +0000 UTC m=+83.610371918 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.304690 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.304870 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-01 15:43:04.304852089 +0000 UTC m=+83.610458490 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.359929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.359964 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.359977 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.359994 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.360004 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.405569 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.406332 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.405855 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.406469 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.406500 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.406599 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-01 15:43:04.406563842 +0000 UTC m=+83.712170173 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.406732 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.406789 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.406812 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.406894 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-01 15:43:04.40687092 +0000 UTC m=+83.712477151 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.463080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.463150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.463163 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.463178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.463193 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.565385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.565424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.565435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.565450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.565461 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.601050 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.601228 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.601296 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.601091 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.601115 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.601380 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.602498 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:32 crc kubenswrapper[4949]: E1001 15:42:32.602740 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.669026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.669186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.669211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.669235 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.669297 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.773002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.773326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.773405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.773502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.773664 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.876900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.877144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.877253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.877338 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.877449 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.979727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.979777 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.979794 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.979816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:32 crc kubenswrapper[4949]: I1001 15:42:32.979832 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:32Z","lastTransitionTime":"2025-10-01T15:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.082541 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.083278 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.083299 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.083315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.083326 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:33Z","lastTransitionTime":"2025-10-01T15:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.176711 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.187158 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.188339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.188371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.188382 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.188396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.188407 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:33Z","lastTransitionTime":"2025-10-01T15:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.190263 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.199983 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.217803 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:21Z\\\",\\\"message\\\":\\\":160\\\\nI1001 15:42:20.949761 6420 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1001 15:42:20.949806 6420 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1001 15:42:20.950328 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1001 15:42:20.950359 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 15:42:20.950367 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:20.950404 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:20.950410 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:20.950449 6420 factory.go:656] Stopping watch factory\\\\nI1001 15:42:20.950459 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:20.950466 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1001 15:42:20.950472 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:20.950475 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:20.950483 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.232765 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.245821 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\"
,\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.257588 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.269326 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.286626 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.290175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.290220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.290231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:33 crc 
kubenswrapper[4949]: I1001 15:42:33.290248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.290258 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:33Z","lastTransitionTime":"2025-10-01T15:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.300032 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.312838 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.329280 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.344246 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc 
kubenswrapper[4949]: I1001 15:42:33.367809 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.383578 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.392781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.392838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.392848 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.392864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.392892 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:33Z","lastTransitionTime":"2025-10-01T15:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.398720 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursive
ReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.410525 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"ipta
bles-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.426090 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fc
c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secr
ets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"last
State\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\
\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"starte
d\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:33Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.495376 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.495423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.495435 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.495451 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.495462 4949 setters.go:603] "Node became not 
ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:33Z","lastTransitionTime":"2025-10-01T15:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.598875 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.598945 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.598959 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.598984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.598999 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:33Z","lastTransitionTime":"2025-10-01T15:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.701767 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.701815 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.701826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.701864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.701875 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:33Z","lastTransitionTime":"2025-10-01T15:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.804731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.804775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.804789 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.804809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.804820 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:33Z","lastTransitionTime":"2025-10-01T15:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.907871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.907928 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.907942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.907972 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:33 crc kubenswrapper[4949]: I1001 15:42:33.907995 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:33Z","lastTransitionTime":"2025-10-01T15:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.010177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.010222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.010233 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.010246 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.010254 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.112922 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.112958 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.112966 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.112979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.112989 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.215822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.216101 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.216234 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.216330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.216414 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.319186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.319229 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.319239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.319252 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.319261 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.421886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.421927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.421941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.421957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.421969 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.525380 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.525443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.525464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.525494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.525517 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.600820 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.600943 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:34 crc kubenswrapper[4949]: E1001 15:42:34.600989 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.601060 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:34 crc kubenswrapper[4949]: E1001 15:42:34.601254 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.600851 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:34 crc kubenswrapper[4949]: E1001 15:42:34.601684 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:34 crc kubenswrapper[4949]: E1001 15:42:34.601771 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.602064 4949 scope.go:117] "RemoveContainer" containerID="3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.628823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.629318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.629432 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.629544 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.629668 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.731917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.732232 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.732346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.732458 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.732578 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.835298 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.835338 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.835349 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.835364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.835406 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.935182 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:34 crc kubenswrapper[4949]: E1001 15:42:34.935355 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:34 crc kubenswrapper[4949]: E1001 15:42:34.935497 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs podName:d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd nodeName:}" failed. No retries permitted until 2025-10-01 15:42:50.935472186 +0000 UTC m=+70.241078557 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs") pod "network-metrics-daemon-kfx8b" (UID: "d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.939646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.939698 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.939709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.939729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.939740 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:34Z","lastTransitionTime":"2025-10-01T15:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.971532 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/1.log" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.974390 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1"} Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.974908 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:42:34 crc kubenswrapper[4949]: I1001 15:42:34.989616 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d
34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:34Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.001523 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:34Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.017757 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshif
t-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.042504 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.042537 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.042545 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.042559 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.042567 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.077459 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z 
is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.090324 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc 
kubenswrapper[4949]: I1001 15:42:35.103573 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.117596 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.138329 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.144587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.144624 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.144633 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.144648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.144658 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.154006 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.168479 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"las
tState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.182847 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987f
a3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.204441 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d2
34356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.217591 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.234163 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.246813 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.246859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.246870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.246890 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.246903 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.247107 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.263889 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1
d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.277007 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.301069 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:21Z\\\",\\\"message\\\":\\\":160\\\\nI1001 15:42:20.949761 6420 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1001 15:42:20.949806 6420 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1001 15:42:20.950328 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1001 15:42:20.950359 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 15:42:20.950367 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:20.950404 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:20.950410 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:20.950449 6420 factory.go:656] Stopping watch factory\\\\nI1001 15:42:20.950459 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:20.950466 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1001 15:42:20.950472 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:20.950475 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:20.950483 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 
15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\
":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.349019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.349044 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.349055 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.349067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.349076 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.452284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.452332 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.452348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.452369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.452385 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.555076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.555134 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.555144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.555163 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.555182 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.657903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.657945 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.657957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.657971 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.657983 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.760462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.760553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.760580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.760615 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.760641 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.863091 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.863190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.863205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.863230 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.863246 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.966697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.966744 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.966757 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.966777 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.966792 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:35Z","lastTransitionTime":"2025-10-01T15:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.981206 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/2.log" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.982345 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/1.log" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.986435 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1" exitCode=1 Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.986484 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1"} Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.986522 4949 scope.go:117] "RemoveContainer" containerID="3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064" Oct 01 15:42:35 crc kubenswrapper[4949]: I1001 15:42:35.987978 4949 scope.go:117] "RemoveContainer" containerID="b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1" Oct 01 15:42:35 crc kubenswrapper[4949]: E1001 15:42:35.988407 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.007117 4949 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b
4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.020758 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.035847 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.051914 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.068622 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.071101 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.071169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.071186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.071216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.071228 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.081767 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.096922 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/h
ost/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.110913 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc 
kubenswrapper[4949]: I1001 15:42:36.134310 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}
]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779
036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"sta
te\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"moun
tPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.152552 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.169614 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.173814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.173848 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.173857 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.173877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.173891 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.185973 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.200265 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.218181 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\
\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\
\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/ru
n/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":
{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.231564 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.242639 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.265102 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3583993658cb241148305ec5df4ca9c7b72512cd521369892abec9d2c06c1064\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:21Z\\\",\\\"message\\\":\\\":160\\\\nI1001 15:42:20.949761 6420 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI1001 15:42:20.949806 6420 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI1001 15:42:20.950328 6420 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI1001 15:42:20.950359 6420 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI1001 15:42:20.950367 6420 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI1001 15:42:20.950404 6420 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI1001 15:42:20.950410 6420 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI1001 15:42:20.950449 6420 factory.go:656] Stopping watch factory\\\\nI1001 15:42:20.950459 6420 handler.go:208] Removed *v1.Node event handler 2\\\\nI1001 15:42:20.950466 6420 ovnkube.go:599] Stopped ovnkube\\\\nI1001 15:42:20.950472 6420 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI1001 15:42:20.950475 6420 handler.go:208] Removed *v1.Node event handler 7\\\\nI1001 15:42:20.950483 6420 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI1001 15\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:35Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: 
certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:42:35.385219 6563 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1001 15:42:35.385254 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"moun
tPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\
"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.276102 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.276215 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.276227 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.276247 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.276259 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.278351 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:36Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.379198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc 
kubenswrapper[4949]: I1001 15:42:36.379237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.379248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.379262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.379271 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.482482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.482533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.482545 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.482562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.482575 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.585071 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.585109 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.585146 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.585162 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.585172 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.601042 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.601083 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.601057 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.601172 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:36 crc kubenswrapper[4949]: E1001 15:42:36.601243 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:36 crc kubenswrapper[4949]: E1001 15:42:36.601374 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:36 crc kubenswrapper[4949]: E1001 15:42:36.601454 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:36 crc kubenswrapper[4949]: E1001 15:42:36.601581 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.687009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.687041 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.687050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.687062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.687073 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.789635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.789671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.789680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.789692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.789703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.892091 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.892151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.892165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.892185 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.892197 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.993711 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/2.log" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.994340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.994381 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.994391 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.994405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:36 crc kubenswrapper[4949]: I1001 15:42:36.994417 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:36Z","lastTransitionTime":"2025-10-01T15:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:36.999413 4949 scope.go:117] "RemoveContainer" containerID="b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1" Oct 01 15:42:37 crc kubenswrapper[4949]: E1001 15:42:36.999628 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.016892 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.034496 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.048777 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.062820 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.078147 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc 
kubenswrapper[4949]: I1001 15:42:37.096882 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.097751 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.097816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.097837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.097862 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.097882 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.115209 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/sta
tic-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"i
mageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.132767 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.147899 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.162732 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.186559 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.200146 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.200202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.200219 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.200244 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.200261 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.209414 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\
\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastStat
e\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"re
ason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.221953 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.250156 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:35Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:42:35.385219 6563 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1001 15:42:35.385254 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.266074 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.281289 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.293226 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.303261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.303336 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.303362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.303392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.303413 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.307831 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.407196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.407254 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.407271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.407294 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.407312 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.513901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.513941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.513959 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.513977 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.513988 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.616372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.616413 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.616424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.616439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.616448 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.686110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.686163 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.686171 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.686184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.686193 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: E1001 15:42:37.699409 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.703223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.703253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.703262 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.703276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.703285 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: E1001 15:42:37.717994 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.721876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.721948 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.721961 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.721976 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.721988 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: E1001 15:42:37.736341 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.740492 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.740574 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.740585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.740601 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.740612 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: E1001 15:42:37.753062 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the previous attempt; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.758019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.758080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.758093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.758110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.758142 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: E1001 15:42:37.770889 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status [status patch payload identical to the previous attempt; elided] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z"
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:37Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:37 crc kubenswrapper[4949]: E1001 15:42:37.771011 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.772533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.772562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.772570 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.772583 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.772593 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.876019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.876104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.876182 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.876219 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.876246 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.978794 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.978830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.978845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.978862 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:37 crc kubenswrapper[4949]: I1001 15:42:37.978874 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:37Z","lastTransitionTime":"2025-10-01T15:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.081352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.081415 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.081436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.081465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.081490 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:38Z","lastTransitionTime":"2025-10-01T15:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.184444 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.184496 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.184511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.184532 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.184548 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:38Z","lastTransitionTime":"2025-10-01T15:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.288405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.288762 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.288781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.288804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.288818 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:38Z","lastTransitionTime":"2025-10-01T15:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.390818 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.390861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.390872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.390887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.390900 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:38Z","lastTransitionTime":"2025-10-01T15:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.493720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.493779 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.493791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.493808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.493820 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:38Z","lastTransitionTime":"2025-10-01T15:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.597444 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.597507 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.597519 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.597539 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.597555 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:38Z","lastTransitionTime":"2025-10-01T15:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.601242 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.601326 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:38 crc kubenswrapper[4949]: E1001 15:42:38.601411 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.601274 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.601274 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:38 crc kubenswrapper[4949]: E1001 15:42:38.601502 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:38 crc kubenswrapper[4949]: E1001 15:42:38.601636 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:38 crc kubenswrapper[4949]: E1001 15:42:38.601737 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.700482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.700546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.700558 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.700609 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.700621 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:38Z","lastTransitionTime":"2025-10-01T15:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.803237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.803302 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.803320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.803346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.803364 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:38Z","lastTransitionTime":"2025-10-01T15:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.905893 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.905947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.905958 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.905974 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:38 crc kubenswrapper[4949]: I1001 15:42:38.905985 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:38Z","lastTransitionTime":"2025-10-01T15:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.008858 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.008900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.008912 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.008927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.008938 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.112494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.112549 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.112560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.112577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.112589 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.215855 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.215907 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.215920 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.215939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.215952 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.319580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.319652 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.319679 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.319705 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.319723 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.422280 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.422348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.422373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.422403 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.422427 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.525828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.525919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.525929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.525953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.525967 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.628708 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.628779 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.628798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.628824 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.628842 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.731819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.731890 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.731914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.731945 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.731967 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.835516 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.835589 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.835611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.835634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.835651 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.938286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.938328 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.938342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.938361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:39 crc kubenswrapper[4949]: I1001 15:42:39.938373 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:39Z","lastTransitionTime":"2025-10-01T15:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.041099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.041178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.041195 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.041212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.041223 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.143812 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.143887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.143907 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.143938 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.143965 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.246729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.246785 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.246798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.246817 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.246829 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.349686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.349735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.349747 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.349765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.349777 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.452100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.452180 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.452198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.452223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.452238 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.555752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.555801 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.555816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.555832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.555856 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.605242 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.605312 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:40 crc kubenswrapper[4949]: E1001 15:42:40.605367 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.605450 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:40 crc kubenswrapper[4949]: E1001 15:42:40.605564 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.605768 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:40 crc kubenswrapper[4949]: E1001 15:42:40.605843 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:40 crc kubenswrapper[4949]: E1001 15:42:40.606016 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.659415 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.659474 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.659491 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.659514 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.659533 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.762329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.762394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.762418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.762448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.762478 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.865574 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.865613 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.865625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.865641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.865654 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.967736 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.967776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.967786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.967799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:40 crc kubenswrapper[4949]: I1001 15:42:40.967811 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:40Z","lastTransitionTime":"2025-10-01T15:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.070450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.070497 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.070536 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.070554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.070565 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:41Z","lastTransitionTime":"2025-10-01T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.174033 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.174346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.174409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.174479 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.174569 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:41Z","lastTransitionTime":"2025-10-01T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.278022 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.278120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.278181 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.278207 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.278225 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:41Z","lastTransitionTime":"2025-10-01T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.381508 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.381585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.381603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.382055 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.382105 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:41Z","lastTransitionTime":"2025-10-01T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.484808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.485067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.485156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.485218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.485272 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:41Z","lastTransitionTime":"2025-10-01T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.588248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.588308 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.588329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.588355 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.588373 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:41Z","lastTransitionTime":"2025-10-01T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.619372 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.633417 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.653608 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.669152 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc 
kubenswrapper[4949]: I1001 15:42:41.690472 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.692192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.692350 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.692973 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.693065 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.693162 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:41Z","lastTransitionTime":"2025-10-01T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.711143 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.730623 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.749373 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.770191 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.794906 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.797808 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.797940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.797963 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.797985 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.798000 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:41Z","lastTransitionTime":"2025-10-01T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.817847 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.838536 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.858512 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.879873 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.899030 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"c
ri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.901046 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.901116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.901167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.901193 4949 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.901213 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:41Z","lastTransitionTime":"2025-10-01T15:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.913653 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",
\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.940737 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:35Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:42:35.385219 6563 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1001 15:42:35.385254 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:41 crc kubenswrapper[4949]: I1001 15:42:41.955592 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:41Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.004045 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.004091 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.004105 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.004148 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.004163 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.106877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.107639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.107702 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.107738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.107762 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.211472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.211521 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.211534 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.211551 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.211563 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.315042 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.315112 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.315167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.315195 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.315218 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.418099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.418238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.418265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.418296 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.418319 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.521222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.521255 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.521264 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.521277 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.521287 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.600901 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.600972 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.600991 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.600912 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:42 crc kubenswrapper[4949]: E1001 15:42:42.601093 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:42 crc kubenswrapper[4949]: E1001 15:42:42.601452 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:42 crc kubenswrapper[4949]: E1001 15:42:42.601558 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:42 crc kubenswrapper[4949]: E1001 15:42:42.601613 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.624519 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.624558 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.624572 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.624599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.624618 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.728160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.728217 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.728236 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.728260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.728282 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.831509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.831589 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.831669 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.831704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.831727 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.934723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.934779 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.934796 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.934822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:42 crc kubenswrapper[4949]: I1001 15:42:42.934839 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:42Z","lastTransitionTime":"2025-10-01T15:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.036993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.037025 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.037032 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.037047 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.037059 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.140428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.140496 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.140517 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.140543 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.140560 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.244049 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.244180 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.244218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.244249 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.244273 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.346858 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.346934 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.346952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.346970 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.346980 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.449983 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.450065 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.450078 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.450099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.450112 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.552819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.552927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.552952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.552983 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.553006 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.656005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.656106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.656144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.656169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.656188 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.761661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.761735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.761761 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.761792 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.761815 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.865156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.865484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.865607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.865737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.865852 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.968705 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.968988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.969145 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.969284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:43 crc kubenswrapper[4949]: I1001 15:42:43.969418 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:43Z","lastTransitionTime":"2025-10-01T15:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.071732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.071993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.072327 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.072550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.072810 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:44Z","lastTransitionTime":"2025-10-01T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.176605 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.176686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.176709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.176739 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.176761 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:44Z","lastTransitionTime":"2025-10-01T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.280063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.280103 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.280114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.280144 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.280155 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:44Z","lastTransitionTime":"2025-10-01T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.382044 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.382081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.382093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.382107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.382117 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:44Z","lastTransitionTime":"2025-10-01T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.489406 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.489484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.489524 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.489554 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.489577 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:44Z","lastTransitionTime":"2025-10-01T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.591958 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.592438 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.592515 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.592636 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.592703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:44Z","lastTransitionTime":"2025-10-01T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.601356 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.601404 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.601386 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.601356 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:44 crc kubenswrapper[4949]: E1001 15:42:44.601498 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:44 crc kubenswrapper[4949]: E1001 15:42:44.601616 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:44 crc kubenswrapper[4949]: E1001 15:42:44.601664 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:44 crc kubenswrapper[4949]: E1001 15:42:44.601753 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.696026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.696089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.696111 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.696183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.696201 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:44Z","lastTransitionTime":"2025-10-01T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.802032 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.802107 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.802166 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.802198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.802221 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:44Z","lastTransitionTime":"2025-10-01T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.906638 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.906683 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.906694 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.906712 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:44 crc kubenswrapper[4949]: I1001 15:42:44.906728 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:44Z","lastTransitionTime":"2025-10-01T15:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.009784 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.009877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.009907 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.009939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.009961 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.112831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.112865 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.112880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.112900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.112915 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.215828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.215887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.215900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.215918 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.215930 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.318367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.318456 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.318480 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.319007 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.319324 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.422856 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.423746 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.423871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.423991 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.424103 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.526745 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.526814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.526837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.526869 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.526891 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.629708 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.630005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.630102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.630218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.630305 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.733237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.733520 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.733621 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.733712 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.733789 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.836868 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.836904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.836915 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.836930 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.836941 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.939444 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.939486 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.939499 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.939515 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:45 crc kubenswrapper[4949]: I1001 15:42:45.939527 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:45Z","lastTransitionTime":"2025-10-01T15:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.041719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.041863 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.041890 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.041917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.041939 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.144446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.144488 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.144502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.144520 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.144534 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.247214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.247287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.247310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.247343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.247364 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.350217 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.351221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.351363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.351412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.351467 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.454486 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.454526 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.454538 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.454553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.454566 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.557845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.557888 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.557898 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.557914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.557927 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.600851 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.600941 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.601010 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:46 crc kubenswrapper[4949]: E1001 15:42:46.601055 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.601072 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:46 crc kubenswrapper[4949]: E1001 15:42:46.601186 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:46 crc kubenswrapper[4949]: E1001 15:42:46.601223 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:46 crc kubenswrapper[4949]: E1001 15:42:46.601302 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.661352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.661398 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.661409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.661427 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.661438 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.764178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.764241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.764252 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.764267 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.764280 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.867148 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.867193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.867204 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.867220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.867235 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.969543 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.970260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.970475 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.970620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:46 crc kubenswrapper[4949]: I1001 15:42:46.970756 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:46Z","lastTransitionTime":"2025-10-01T15:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.073175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.073220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.073233 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.073251 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.073264 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.175264 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.175329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.175343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.175359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.175370 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.277337 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.277374 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.277386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.277401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.277412 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.380101 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.380166 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.380177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.380192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.380202 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.482110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.482192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.482209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.482239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.482255 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.584695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.584738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.584753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.584768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.584778 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.687364 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.687419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.687431 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.687449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.687461 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.790530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.790600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.790640 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.790656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.790667 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.865616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.865654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.865663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.865694 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.865706 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: E1001 15:42:47.879349 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:47Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.883305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.883336 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.883348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.883363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.883372 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: E1001 15:42:47.896619 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:47Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.900310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.900354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.900368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.900385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.900397 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: E1001 15:42:47.917269 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:47Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.920866 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.920894 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.920903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.920916 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.920925 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: E1001 15:42:47.931610 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:47Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.935066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.935108 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.935137 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.935155 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.935166 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:47 crc kubenswrapper[4949]: E1001 15:42:47.951167 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:47Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:47 crc kubenswrapper[4949]: E1001 15:42:47.951308 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.952787 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.952833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.952845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.952859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:47 crc kubenswrapper[4949]: I1001 15:42:47.952870 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:47Z","lastTransitionTime":"2025-10-01T15:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.055625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.055677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.055690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.055709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.055723 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.159152 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.159194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.159206 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.159225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.159237 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.261865 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.261914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.261931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.261953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.261969 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.364933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.364990 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.365007 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.365025 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.365038 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.467295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.467356 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.467373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.467399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.467418 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.570569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.570671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.570690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.570718 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.570739 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.601589 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.601653 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.601669 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:48 crc kubenswrapper[4949]: E1001 15:42:48.601774 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.601849 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:48 crc kubenswrapper[4949]: E1001 15:42:48.601904 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:48 crc kubenswrapper[4949]: E1001 15:42:48.601993 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:48 crc kubenswrapper[4949]: E1001 15:42:48.602111 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.672864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.672931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.672945 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.672962 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.672975 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.775703 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.775746 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.775758 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.775776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.775788 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.878368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.878401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.878408 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.878420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.878429 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.981568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.981631 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.981651 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.981675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:48 crc kubenswrapper[4949]: I1001 15:42:48.981694 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:48Z","lastTransitionTime":"2025-10-01T15:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.088162 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.088200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.088209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.088223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.088232 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:49Z","lastTransitionTime":"2025-10-01T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.191701 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.191749 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.191765 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.191788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.191805 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:49Z","lastTransitionTime":"2025-10-01T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.293799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.293824 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.293831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.293841 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.293849 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:49Z","lastTransitionTime":"2025-10-01T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.396644 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.396710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.396725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.396742 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.396754 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:49Z","lastTransitionTime":"2025-10-01T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.499603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.499739 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.499793 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.499819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.499837 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:49Z","lastTransitionTime":"2025-10-01T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.602752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.602804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.602816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.602833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.602845 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:49Z","lastTransitionTime":"2025-10-01T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.705620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.705707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.705727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.706003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.706043 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:49Z","lastTransitionTime":"2025-10-01T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.808923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.808999 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.809014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.809034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.809045 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:49Z","lastTransitionTime":"2025-10-01T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.911743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.911798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.911835 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.911857 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:49 crc kubenswrapper[4949]: I1001 15:42:49.911871 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:49Z","lastTransitionTime":"2025-10-01T15:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.013930 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.013969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.013981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.013998 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.014009 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.116953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.116999 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.117010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.117028 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.117039 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.219260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.219298 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.219307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.219323 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.219331 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.322051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.322082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.322091 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.322104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.322113 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.424610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.424642 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.424650 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.424663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.424674 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.527052 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.527114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.527183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.527212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.527236 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.601065 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.601185 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.601250 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.601078 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:50 crc kubenswrapper[4949]: E1001 15:42:50.601280 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:50 crc kubenswrapper[4949]: E1001 15:42:50.601412 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:50 crc kubenswrapper[4949]: E1001 15:42:50.601511 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:50 crc kubenswrapper[4949]: E1001 15:42:50.601589 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.630592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.630652 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.630672 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.630697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.630716 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.733856 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.733942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.733966 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.733996 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.734020 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.836310 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.836347 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.836358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.836373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.836384 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.938563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.938595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.938621 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.938637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:50 crc kubenswrapper[4949]: I1001 15:42:50.938646 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:50Z","lastTransitionTime":"2025-10-01T15:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.009608 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:51 crc kubenswrapper[4949]: E1001 15:42:51.009803 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:51 crc kubenswrapper[4949]: E1001 15:42:51.009886 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs podName:d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd nodeName:}" failed. No retries permitted until 2025-10-01 15:43:23.009863429 +0000 UTC m=+102.315469620 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs") pod "network-metrics-daemon-kfx8b" (UID: "d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.040828 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.040860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.040869 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.040882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.040891 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.143639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.143686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.143707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.143731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.143747 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.246865 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.246921 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.246936 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.246957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.246973 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.350286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.350347 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.350365 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.350391 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.350409 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.452237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.452309 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.452327 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.452361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.452378 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.557413 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.557689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.557832 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.557877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.558008 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.601854 4949 scope.go:117] "RemoveContainer" containerID="b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1" Oct 01 15:42:51 crc kubenswrapper[4949]: E1001 15:42:51.602099 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.614821 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha2
56:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.637647 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\"
,\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.661377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.661414 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.661424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.661437 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.661446 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.674655 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.697794 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.708550 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.723396 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.734892 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc 
kubenswrapper[4949]: I1001 15:42:51.746068 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.760508 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.763249 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.763278 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.763286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.763298 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.763307 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.772529 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.783375 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.792769 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.804783 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.829838 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.840116 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1
b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.862338 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:35Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:42:35.385219 6563 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1001 15:42:35.385254 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.865705 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.865727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.865736 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.865749 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.865757 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.874607 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.887393 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\
":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:51Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.968199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.968234 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.968245 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.968261 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:51 crc kubenswrapper[4949]: I1001 15:42:51.968272 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:51Z","lastTransitionTime":"2025-10-01T15:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.070588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.070657 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.070667 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.070680 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.070690 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.173183 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.173239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.173250 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.173270 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.173281 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.275439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.275502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.275520 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.275545 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.275562 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.378655 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.378709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.378723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.378747 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.378762 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.481569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.481616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.481628 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.481647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.481660 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.584684 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.584716 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.584725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.584773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.584785 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.601201 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.601252 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.601201 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:52 crc kubenswrapper[4949]: E1001 15:42:52.601323 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.601271 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:52 crc kubenswrapper[4949]: E1001 15:42:52.601398 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:52 crc kubenswrapper[4949]: E1001 15:42:52.601466 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:52 crc kubenswrapper[4949]: E1001 15:42:52.601537 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.687546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.687580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.687588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.687602 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.687612 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.789924 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.790004 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.790029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.790062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.790085 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.893340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.893372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.893381 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.893395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.893404 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.995788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.995834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.995845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.995864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:52 crc kubenswrapper[4949]: I1001 15:42:52.995874 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:52Z","lastTransitionTime":"2025-10-01T15:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.099621 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.099886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.099950 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.100019 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.100086 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:53Z","lastTransitionTime":"2025-10-01T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.202704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.202777 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.202795 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.202813 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.202824 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:53Z","lastTransitionTime":"2025-10-01T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.304799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.305236 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.305396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.305550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.305679 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:53Z","lastTransitionTime":"2025-10-01T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.408908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.408968 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.408987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.409012 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.409028 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:53Z","lastTransitionTime":"2025-10-01T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.511404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.511439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.511451 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.511467 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.511478 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:53Z","lastTransitionTime":"2025-10-01T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.614135 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.614169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.614178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.614190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.614200 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:53Z","lastTransitionTime":"2025-10-01T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.716068 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.716700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.716823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.716927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.717191 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:53Z","lastTransitionTime":"2025-10-01T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.820467 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.820549 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.820565 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.820594 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.820612 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:53Z","lastTransitionTime":"2025-10-01T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.923145 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.923182 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.923195 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.923211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:53 crc kubenswrapper[4949]: I1001 15:42:53.923223 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:53Z","lastTransitionTime":"2025-10-01T15:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.025682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.025758 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.025781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.025811 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.025833 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.049763 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/0.log" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.049819 4949 generic.go:334] "Generic (PLEG): container finished" podID="ffe32683-6bbe-472a-811e-8fe0fd1d1bb6" containerID="68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419" exitCode=1 Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.049849 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s5r4m" event={"ID":"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6","Type":"ContainerDied","Data":"68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.050219 4949 scope.go:117] "RemoveContainer" containerID="68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.069386 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.082255 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.109710 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:35Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:42:35.385219 6563 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1001 15:42:35.385254 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.124744 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.128576 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.128610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.128621 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.128640 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.128655 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.141316 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35
825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 
cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0
,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.150829 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.162246 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.177182 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.188923 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.199611 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.214432 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:54Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:53Z\\\",\\\"message\\\":\\\"2025-10-01T15:42:08+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38\\\\n2025-10-01T15:42:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38 to /host/opt/cni/bin/\\\\n2025-10-01T15:42:08Z [verbose] multus-daemon started\\\\n2025-10-01T15:42:08Z [verbose] Readiness Indicator file check\\\\n2025-10-01T15:42:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/c
ni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.222759 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.232236 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.232283 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.232300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.232384 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.232463 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.240081 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\
\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-
v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2
025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.255540 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.268731 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.283647 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.296326 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.312006 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:54Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.334307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.334345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.334360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.334375 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.334386 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.437000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.437051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.437063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.437086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.437098 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.539359 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.539431 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.539455 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.539790 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.539812 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.600880 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.600964 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:54 crc kubenswrapper[4949]: E1001 15:42:54.601044 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:54 crc kubenswrapper[4949]: E1001 15:42:54.601190 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.601273 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:54 crc kubenswrapper[4949]: E1001 15:42:54.601330 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.601389 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:54 crc kubenswrapper[4949]: E1001 15:42:54.601431 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.642916 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.642980 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.643001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.643024 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.643038 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.745622 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.745659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.745667 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.745681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.745690 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.848878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.848946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.848963 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.848987 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.849004 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.951799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.951831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.951839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.951851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:54 crc kubenswrapper[4949]: I1001 15:42:54.951859 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:54Z","lastTransitionTime":"2025-10-01T15:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.053535 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.053770 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.053841 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.053903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.053977 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.054018 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/0.log" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.054223 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s5r4m" event={"ID":"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6","Type":"ContainerStarted","Data":"b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.070344 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.083025 4949 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.106600 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:35Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:42:35.385219 6563 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1001 15:42:35.385254 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.119175 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.133313 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\"
,\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.144022 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.157066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.157100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.157109 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 
15:42:55.157135 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.157145 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.157833 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc 
kubenswrapper[4949]: I1001 15:42:55.172207 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.186482 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.197430 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.208629 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.222298 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:53Z\\\",\\\"message\\\":\\\"2025-10-01T15:42:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38\\\\n2025-10-01T15:42:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38 to /host/opt/cni/bin/\\\\n2025-10-01T15:42:08Z [verbose] multus-daemon started\\\\n2025-10-01T15:42:08Z [verbose] 
Readiness Indicator file check\\\\n2025-10-01T15:42:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.235662 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d
a01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.258976 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.259005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.259016 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.259033 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: 
I1001 15:42:55.259044 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.261840 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd
/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\
\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33
e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.276279 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.287980 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.306522 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.318713 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:55Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.362050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.362104 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.362153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.362178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.362195 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.464581 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.464627 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.464643 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.464664 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.464679 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.567882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.567970 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.567984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.568008 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.568024 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.671426 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.671757 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.671877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.671984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.672073 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.774354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.775271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.775417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.775573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.775720 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.877530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.877885 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.877898 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.877917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.877928 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.981105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.981511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.981665 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.981796 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:55 crc kubenswrapper[4949]: I1001 15:42:55.981910 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:55Z","lastTransitionTime":"2025-10-01T15:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.084877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.084961 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.084986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.085014 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.085036 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:56Z","lastTransitionTime":"2025-10-01T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.188995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.189065 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.189084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.189113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.189164 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:56Z","lastTransitionTime":"2025-10-01T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.292482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.292544 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.292562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.292585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.292602 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:56Z","lastTransitionTime":"2025-10-01T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.395319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.395385 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.395410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.395440 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.395457 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:56Z","lastTransitionTime":"2025-10-01T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.498597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.498667 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.498686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.498710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.498728 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:56Z","lastTransitionTime":"2025-10-01T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.600618 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.600740 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.600622 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:56 crc kubenswrapper[4949]: E1001 15:42:56.600803 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.600832 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:56 crc kubenswrapper[4949]: E1001 15:42:56.601095 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:56 crc kubenswrapper[4949]: E1001 15:42:56.601272 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:56 crc kubenswrapper[4949]: E1001 15:42:56.601417 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.603882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.604166 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.604395 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.604548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.604687 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:56Z","lastTransitionTime":"2025-10-01T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.707737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.707806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.707826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.707861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.707882 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:56Z","lastTransitionTime":"2025-10-01T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.810679 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.810766 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.810801 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.810834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.810859 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:56Z","lastTransitionTime":"2025-10-01T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.915044 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.915091 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.915105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.915146 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:56 crc kubenswrapper[4949]: I1001 15:42:56.915162 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:56Z","lastTransitionTime":"2025-10-01T15:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.018996 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.019070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.019150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.019182 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.019204 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.122001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.122093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.122108 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.122155 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.122171 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.225216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.225255 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.225265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.225282 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.225298 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.328934 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.329023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.329043 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.329082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.329099 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.432033 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.432086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.432100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.432168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.432204 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.535373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.535420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.535429 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.535445 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.535454 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.638524 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.638595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.638612 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.638637 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.638655 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.741423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.741509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.741531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.741560 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.741578 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.845726 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.845813 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.845839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.845886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.845919 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.949092 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.949175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.949199 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.949229 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.949251 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.961081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.961239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.961325 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.961412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.961502 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:57 crc kubenswrapper[4949]: E1001 15:42:57.980599 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:57Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.985448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.985493 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.985509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.985530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:57 crc kubenswrapper[4949]: I1001 15:42:57.985546 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:57Z","lastTransitionTime":"2025-10-01T15:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: E1001 15:42:58.003488 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:57Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:58Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.007971 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.008050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.008063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.008080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.008091 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: E1001 15:42:58.023745 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:58Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.026926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.026956 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.026965 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.026978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.026987 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: E1001 15:42:58.038800 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:58Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.042535 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.042581 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.042591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.042606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.042616 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: E1001 15:42:58.053879 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:58Z is after 2025-08-24T17:21:41Z" Oct 01 15:42:58 crc kubenswrapper[4949]: E1001 15:42:58.053992 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.055407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.055516 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.055585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.055695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.055772 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.158460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.158846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.158973 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.159111 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.159261 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.262738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.262797 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.262814 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.262840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.262859 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.365699 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.365761 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.365772 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.365789 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.365802 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.468427 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.468502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.468521 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.468548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.468567 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.571664 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.572117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.572314 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.572471 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.572621 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.601599 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.601619 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.601664 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:42:58 crc kubenswrapper[4949]: E1001 15:42:58.602317 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:42:58 crc kubenswrapper[4949]: E1001 15:42:58.602110 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.601685 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:42:58 crc kubenswrapper[4949]: E1001 15:42:58.602504 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:42:58 crc kubenswrapper[4949]: E1001 15:42:58.602716 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.676003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.676062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.676074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.676093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.676106 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.785142 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.785178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.785188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.785203 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.785214 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.887926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.887988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.888002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.888023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.888038 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.991147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.991189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.991198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.991215 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:58 crc kubenswrapper[4949]: I1001 15:42:58.991224 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:58Z","lastTransitionTime":"2025-10-01T15:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.093495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.093541 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.093553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.093570 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.093583 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:59Z","lastTransitionTime":"2025-10-01T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.196593 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.196638 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.196659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.196679 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.196693 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:59Z","lastTransitionTime":"2025-10-01T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.299996 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.300072 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.300086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.300106 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.300145 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:59Z","lastTransitionTime":"2025-10-01T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.403098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.403168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.403186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.403209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.403220 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:59Z","lastTransitionTime":"2025-10-01T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.506406 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.506484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.506510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.506539 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.506560 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:59Z","lastTransitionTime":"2025-10-01T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.608714 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.608783 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.608801 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.608839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.608857 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:59Z","lastTransitionTime":"2025-10-01T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.712101 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.712399 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.712676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.712882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.713081 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:59Z","lastTransitionTime":"2025-10-01T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.816749 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.817172 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.817343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.817556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.817731 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:59Z","lastTransitionTime":"2025-10-01T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.920959 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.920989 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.920998 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.921012 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:42:59 crc kubenswrapper[4949]: I1001 15:42:59.921023 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:42:59Z","lastTransitionTime":"2025-10-01T15:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.024244 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.024567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.024697 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.024796 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.024882 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.128287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.128393 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.128416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.128440 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.128459 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.231475 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.231516 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.231526 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.231544 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.231553 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.336050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.336089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.336098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.336111 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.336135 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.439205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.439282 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.439295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.439318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.439336 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.541659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.541689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.541701 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.541717 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.541727 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.601434 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:00 crc kubenswrapper[4949]: E1001 15:43:00.601550 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.601690 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:00 crc kubenswrapper[4949]: E1001 15:43:00.601733 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.601435 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:00 crc kubenswrapper[4949]: E1001 15:43:00.601838 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.601955 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:00 crc kubenswrapper[4949]: E1001 15:43:00.602107 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.644616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.644658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.644671 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.644687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.644699 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.748023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.748076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.748093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.748147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.748166 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.850344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.850397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.850411 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.850433 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.850446 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.952938 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.953501 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.953602 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.953685 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:00 crc kubenswrapper[4949]: I1001 15:43:00.953757 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:00Z","lastTransitionTime":"2025-10-01T15:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.056725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.056800 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.056823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.056851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.056871 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.159771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.159816 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.159824 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.159839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.159847 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.261830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.261871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.261882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.261908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.261920 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.365256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.365320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.365337 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.365362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.365421 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.468056 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.468165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.468189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.468211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.468227 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.570184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.570225 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.570237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.570254 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.570266 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.629679 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.643084 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.667078 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:35Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:42:35.385219 6563 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1001 15:42:35.385254 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.672097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.672139 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.672151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.672166 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.672178 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.684822 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kub
e-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.701904 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mou
ntPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f894
5c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.713957 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift
-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.727440 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc 
kubenswrapper[4949]: I1001 15:43:01.742988 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.755859 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.769961 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.774450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.774495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.774511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.774534 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.774548 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.782910 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.797033 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:53Z\\\",\\\"message\\\":\\\"2025-10-01T15:42:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38\\\\n2025-10-01T15:42:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38 to /host/opt/cni/bin/\\\\n2025-10-01T15:42:08Z [verbose] multus-daemon started\\\\n2025-10-01T15:42:08Z [verbose] Readiness Indicator file check\\\\n2025-10-01T15:42:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.809239 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987f
a3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.827993 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d2
34356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.838789 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.848570 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.862558 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.872458 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:01Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.877050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.877086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.877095 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.877109 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.877118 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.980249 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.980329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.980353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.980379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:01 crc kubenswrapper[4949]: I1001 15:43:01.980398 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:01Z","lastTransitionTime":"2025-10-01T15:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.082054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.082100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.082112 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.082149 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.082160 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:02Z","lastTransitionTime":"2025-10-01T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.185266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.185319 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.185338 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.185362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.185380 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:02Z","lastTransitionTime":"2025-10-01T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.289089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.289335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.289372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.289450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.289487 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:02Z","lastTransitionTime":"2025-10-01T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.392449 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.392519 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.392536 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.392561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.392580 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:02Z","lastTransitionTime":"2025-10-01T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.495099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.495211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.495235 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.495264 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.495283 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:02Z","lastTransitionTime":"2025-10-01T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.598476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.598536 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.598555 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.598579 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.598595 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:02Z","lastTransitionTime":"2025-10-01T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.601078 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.601153 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.601234 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:02 crc kubenswrapper[4949]: E1001 15:43:02.601423 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.601480 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:02 crc kubenswrapper[4949]: E1001 15:43:02.601687 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:02 crc kubenswrapper[4949]: E1001 15:43:02.601785 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:02 crc kubenswrapper[4949]: E1001 15:43:02.601954 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.706164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.706245 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.706272 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.706305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.706346 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:02Z","lastTransitionTime":"2025-10-01T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.808933 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.808973 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.808984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.808998 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.809009 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:02Z","lastTransitionTime":"2025-10-01T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.911516 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.911551 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.911559 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.911584 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:02 crc kubenswrapper[4949]: I1001 15:43:02.911594 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:02Z","lastTransitionTime":"2025-10-01T15:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.014810 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.014878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.014919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.014944 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.014961 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.117081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.117156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.117167 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.117192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.117200 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.219740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.219805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.219822 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.219847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.219863 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.323824 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.323927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.323952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.323988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.324036 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.427417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.427493 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.427512 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.427536 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.427558 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.531286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.531351 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.531366 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.531406 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.531425 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.633619 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.633686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.633704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.633727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.633746 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.736458 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.736522 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.736539 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.736563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.736580 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.839871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.839922 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.839934 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.839952 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.839964 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.942685 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.942782 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.942798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.942817 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:03 crc kubenswrapper[4949]: I1001 15:43:03.942833 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:03Z","lastTransitionTime":"2025-10-01T15:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.045674 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.045739 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.045757 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.045783 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.045802 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.149681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.149734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.149748 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.149766 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.149778 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.252484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.252567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.252604 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.252635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.252695 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.352993 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.353273 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.353335 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.353484 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.353386174 +0000 UTC m=+147.658992395 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.353557 4949 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.353591 4949 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.353638 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.353613881 +0000 UTC m=+147.659220112 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.353672 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:08.353655482 +0000 UTC m=+147.659261723 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.356047 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.356100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.356145 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.356195 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.356247 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.455212 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.455350 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.455464 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.455512 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.455540 4949 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.455688 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl 
podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.455641201 +0000 UTC m=+147.761247432 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.455732 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.455793 4949 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.455820 4949 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.455964 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.455909449 +0000 UTC m=+147.761515680 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.459620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.459670 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.459689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.459712 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.459728 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.563354 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.563478 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.563500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.563530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.563552 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.600958 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.601076 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.601237 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.601291 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.601468 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.601623 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.601740 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:04 crc kubenswrapper[4949]: E1001 15:43:04.601834 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.616739 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.666975 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.667035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.667048 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.667064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.667077 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.769880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.769939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.769956 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.769982 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.770002 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.873005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.873069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.873086 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.873111 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.873162 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.976345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.976410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.976428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.976451 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:04 crc kubenswrapper[4949]: I1001 15:43:04.976469 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:04Z","lastTransitionTime":"2025-10-01T15:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.080252 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.080327 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.080344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.080368 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.080386 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:05Z","lastTransitionTime":"2025-10-01T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.183892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.183969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.183986 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.184010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.184054 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:05Z","lastTransitionTime":"2025-10-01T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.287337 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.287407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.287424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.287448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.287465 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:05Z","lastTransitionTime":"2025-10-01T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.390690 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.390777 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.390805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.390838 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.390864 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:05Z","lastTransitionTime":"2025-10-01T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.494564 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.494606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.494617 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.494635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.494647 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:05Z","lastTransitionTime":"2025-10-01T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.597607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.597685 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.597709 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.597734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.597751 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:05Z","lastTransitionTime":"2025-10-01T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.602747 4949 scope.go:117] "RemoveContainer" containerID="b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.700868 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.700923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.700942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.700965 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.700982 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:05Z","lastTransitionTime":"2025-10-01T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.804659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.804704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.804722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.804745 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.804762 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:05Z","lastTransitionTime":"2025-10-01T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.907495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.907543 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.907555 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.907573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:05 crc kubenswrapper[4949]: I1001 15:43:05.907584 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:05Z","lastTransitionTime":"2025-10-01T15:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.010548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.010595 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.010606 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.010625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.010637 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.096559 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/2.log" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.102446 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.103703 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.113015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.113042 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.113052 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.113064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.113074 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.128803 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.145758 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57b836d2-a87c-4356-b8cb-71eed69e089e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e4ba077e53324e0a6cb25d94228b1a6265878a923ef88934ee0d87967385c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef3
18bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.160278 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\
\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.173473 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.189180 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.203277 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.215626 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.215653 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.215663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc 
kubenswrapper[4949]: I1001 15:43:06.215676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.215685 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.217620 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.230317 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.243168 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:53Z\\\",\\\"message\\\":\\\"2025-10-01T15:42:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38\\\\n2025-10-01T15:42:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38 to /host/opt/cni/bin/\\\\n2025-10-01T15:42:08Z [verbose] multus-daemon started\\\\n2025-10-01T15:42:08Z [verbose] 
Readiness Indicator file check\\\\n2025-10-01T15:42:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.260943 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24d
a01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.283288 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.300267 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:
43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.317196 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.318067 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.318108 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.318149 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.318169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.318179 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.340011 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.356800 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.379691 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc 
kubenswrapper[4949]: I1001 15:43:06.395901 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.420920 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.420958 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.420967 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.420980 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.420989 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.425458 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:35Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:42:35.385219 6563 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1001 15:42:35.385254 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":
\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\"
:true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.442529 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.523182 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.523210 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.523221 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.523237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.523248 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.601049 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.601153 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.601152 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.601232 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:06 crc kubenswrapper[4949]: E1001 15:43:06.601235 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:06 crc kubenswrapper[4949]: E1001 15:43:06.601336 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:06 crc kubenswrapper[4949]: E1001 15:43:06.601421 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:06 crc kubenswrapper[4949]: E1001 15:43:06.601454 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.626093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.626318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.626380 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.626461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.626531 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.729178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.729256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.729271 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.729465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.729483 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.832695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.832768 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.832788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.832812 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.832829 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.935945 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.936058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.936085 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.936114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:06 crc kubenswrapper[4949]: I1001 15:43:06.936167 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:06Z","lastTransitionTime":"2025-10-01T15:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.038213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.038249 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.038258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.038272 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.038282 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.108217 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/3.log" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.109290 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/2.log" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.112909 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a" exitCode=1 Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.112957 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.112997 4949 scope.go:117] "RemoveContainer" containerID="b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.113622 4949 scope.go:117] "RemoveContainer" containerID="1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a" Oct 01 15:43:07 crc kubenswrapper[4949]: E1001 15:43:07.113769 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.128208 4949 status_manager.go:875] "Failed to update 
status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.143258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.143304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.143315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.143330 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.143341 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.147156 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.160359 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.175252 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.190832 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:53Z\\\",\\\"message\\\":\\\"2025-10-01T15:42:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38\\\\n2025-10-01T15:42:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38 to /host/opt/cni/bin/\\\\n2025-10-01T15:42:08Z [verbose] multus-daemon started\\\\n2025-10-01T15:42:08Z [verbose] 
Readiness Indicator file check\\\\n2025-10-01T15:42:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.202437 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.225864 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\
":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba5
5e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e
49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminate
d\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.244084 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.245179 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.245257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.245276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.245296 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.245344 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.257968 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}
,{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\
",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.271535 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.282265 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.301937 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.316391 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.329401 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.348669 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.348743 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.348767 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.348798 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.348820 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.361505 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b25501be462a751324672b5d9b82300e2ac592102b8247557aa5a70dd38f26c1\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:35Z\\\",\\\"message\\\":\\\"r for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling 
webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:42:35Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:42:35.385219 6563 model_client.go:382] Update operations generated as: [{Op:update Table:Load_Balancer Row:map[external_ids:{GoMap:map[k8s.ovn.org/kind:Service k8s.ovn.org/owner:openshift-route-controller-manager/route-controller-manager]} name:Service_openshift-route-controller-manager/route-controller-manager_TCP_cluster options:{GoMap:map[event:false hairpin_snat_ip:169.254.0.5 fd69::5 neighbor_responder:none reject:true skip_snat:false]} protocol:{GoSet:[tcp]} selection_fields:{GoSet:[]} vips:{GoMap:map[10.217.5.239:443:]}] Rows:[] Columns:[] Mutations:[] Timeout:\\\\u003cnil\\\\u003e Where:[where column _uuid == {18746a4d-8a63-458a-b7e3-8fb89ff95fc0}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI1001 15:42:35.385254 \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:34Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:43:06Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call 
webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:43:06.742091 6988 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built 
\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:43:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kub
e-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\
",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.377558 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.398235 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\"
,\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.415818 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57b836d2-a87c-4356-b8cb-71eed69e089e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e4ba077e53324e0a6cb25d94228b1a6265878a923ef88934ee0d87967385c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef3
18bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.430906 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\
\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:07Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.452432 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.452496 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.452520 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.452549 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.452571 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.555235 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.555268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.555276 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.555289 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.555298 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.657939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.658012 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.658029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.658053 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.658071 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.761061 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.761120 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.761160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.761184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.761201 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.864603 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.864660 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.864678 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.864704 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.864721 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.967940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.968000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.968033 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.968062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:07 crc kubenswrapper[4949]: I1001 15:43:07.968083 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:07Z","lastTransitionTime":"2025-10-01T15:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.071141 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.071188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.071200 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.071218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.071232 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.119891 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/3.log" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.125378 4949 scope.go:117] "RemoveContainer" containerID="1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a" Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.125679 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.142349 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\"
,\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.157703 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57b836d2-a87c-4356-b8cb-71eed69e089e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e4ba077e53324e0a6cb25d94228b1a6265878a923ef88934ee0d87967385c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef3
18bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.171931 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\
\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.174156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.174245 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.174263 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.174320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.174340 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.186440 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc 
kubenswrapper[4949]: I1001 15:43:08.205433 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.222314 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.240379 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.257795 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.276821 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.276880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.276903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc 
kubenswrapper[4949]: I1001 15:43:08.276931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.276954 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.278531 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:53Z\\\",\\\"message\\\":\\\"2025-10-01T15:42:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38\\\\n2025-10-01T15:42:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38 to /host/opt/cni/bin/\\\\n2025-10-01T15:42:08Z [verbose] multus-daemon started\\\\n2025-10-01T15:42:08Z [verbose] Readiness Indicator file check\\\\n2025-10-01T15:42:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.301669 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"
imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\
"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\
"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":tru
e,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"
exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.320639 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.337215 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"qua
y.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:
43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.338061 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.338107 
4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.338116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.338159 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.338170 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.351077 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.354279 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:
41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerI
D\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.355293 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.355352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.355377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.355407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.355432 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.369283 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T
15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.373470 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.377464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.377526 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.377553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.377579 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.377596 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.382694 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.394527 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\
":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc 
kubenswrapper[4949]: E1001 15:43:08.395188 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.398796 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.398839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.398853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.398871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.398883 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.407996 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.417313 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.421851 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.421878 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.421887 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.421901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.421910 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.434927 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:43:06Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:43:06.742091 6988 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:43:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.435328 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.435421 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.436867 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.436892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.436900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.436912 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.436920 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.445696 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:08Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.539580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc 
kubenswrapper[4949]: I1001 15:43:08.539618 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.539630 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.539646 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.539658 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.601482 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.601520 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.601581 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.601686 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.601826 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.601872 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.601977 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:08 crc kubenswrapper[4949]: E1001 15:43:08.602076 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.641686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.641711 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.641718 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.641730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.641739 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.744284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.744340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.744357 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.744377 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.744393 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.847222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.847267 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.847278 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.847295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.847307 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.950173 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.950209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.950218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.950231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:08 crc kubenswrapper[4949]: I1001 15:43:08.950240 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:08Z","lastTransitionTime":"2025-10-01T15:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.052147 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.052188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.052198 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.052213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.052224 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.154730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.154778 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.154787 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.154801 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.154809 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.258445 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.258513 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.258535 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.258567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.258589 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.361090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.361158 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.361172 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.361188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.361200 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.463177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.463241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.463254 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.463273 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.463289 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.566870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.566949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.566962 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.566981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.566993 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.670102 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.670218 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.670241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.670269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.670291 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.773502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.773565 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.773584 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.773609 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.773627 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.876781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.876848 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.876870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.876899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.876916 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.980379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.980419 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.980436 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.980455 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:09 crc kubenswrapper[4949]: I1001 15:43:09.980468 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:09Z","lastTransitionTime":"2025-10-01T15:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.083052 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.083117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.083164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.083191 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.083210 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:10Z","lastTransitionTime":"2025-10-01T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.186470 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.186541 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.186562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.186591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.186615 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:10Z","lastTransitionTime":"2025-10-01T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.289138 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.289196 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.289238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.289257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.289268 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:10Z","lastTransitionTime":"2025-10-01T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.391884 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.391917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.391932 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.391947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.391958 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:10Z","lastTransitionTime":"2025-10-01T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.494069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.494116 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.494146 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.494162 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.494172 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:10Z","lastTransitionTime":"2025-10-01T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.597045 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.597089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.597100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.597115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.597147 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:10Z","lastTransitionTime":"2025-10-01T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.601490 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.601521 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.601571 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.601584 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:10 crc kubenswrapper[4949]: E1001 15:43:10.601641 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:10 crc kubenswrapper[4949]: E1001 15:43:10.601817 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:10 crc kubenswrapper[4949]: E1001 15:43:10.601859 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:10 crc kubenswrapper[4949]: E1001 15:43:10.601912 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.698728 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.698806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.698831 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.698861 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.698883 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:10Z","lastTransitionTime":"2025-10-01T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.800569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.800627 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.800654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.800682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.800704 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:10Z","lastTransitionTime":"2025-10-01T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.903318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.903384 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.903412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.903443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:10 crc kubenswrapper[4949]: I1001 15:43:10.903466 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:10Z","lastTransitionTime":"2025-10-01T15:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.007097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.007177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.007194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.007212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.007251 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.109464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.109534 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.109556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.109588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.109610 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.212813 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.212854 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.212864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.212880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.212891 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.315402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.315443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.315452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.315482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.315490 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.417653 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.417694 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.417705 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.417722 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.417734 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.521058 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.521169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.521190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.521215 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.521234 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.621522 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes
.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.623685 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.623902 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.624068 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.624281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.624465 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.644034 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.656465 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57b836d2-a87c-4356-b8cb-71eed69e089e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e4ba077e53324e0a6cb25d94228b1a6265878a923ef88934ee0d87967385c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef3
18bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.669617 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.690084 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.708033 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:53Z\\\",\\\"message\\\":\\\"2025-10-01T15:42:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38\\\\n2025-10-01T15:42:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38 to /host/opt/cni/bin/\\\\n2025-10-01T15:42:08Z [verbose] multus-daemon started\\\\n2025-10-01T15:42:08Z [verbose] 
Readiness Indicator file check\\\\n2025-10-01T15:42:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.720025 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.726599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.726662 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.726689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.726719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.726741 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.731963 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.743757 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.755580 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.770291 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.782663 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\
\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.799493 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"
name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\
":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.822443 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d234356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.829177 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.829226 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.829239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.829259 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.829272 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.835065 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f
9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.852945 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:43:06Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:43:06.742091 6988 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:43:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.864611 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.876824 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.887556 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:11Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.931361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.931398 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.931407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.931421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:11 crc kubenswrapper[4949]: I1001 15:43:11.931429 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:11Z","lastTransitionTime":"2025-10-01T15:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.033372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.033410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.033418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.033430 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.033439 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.135608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.135639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.135649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.135663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.135672 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.238412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.238460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.238472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.238486 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.238495 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.340748 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.340794 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.340807 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.340824 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.340836 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.443103 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.443322 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.443389 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.443519 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.443599 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.546475 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.546525 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.546535 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.546550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.546559 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.601483 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:12 crc kubenswrapper[4949]: E1001 15:43:12.601679 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.602023 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:12 crc kubenswrapper[4949]: E1001 15:43:12.602232 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.602247 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:12 crc kubenswrapper[4949]: E1001 15:43:12.602412 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.602643 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:12 crc kubenswrapper[4949]: E1001 15:43:12.602877 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.649490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.649544 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.649561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.649583 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.649599 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.752763 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.752981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.753079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.753202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.753281 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.855387 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.855423 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.855434 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.855448 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.855457 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.957355 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.957396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.957409 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.957424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:12 crc kubenswrapper[4949]: I1001 15:43:12.957435 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:12Z","lastTransitionTime":"2025-10-01T15:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.059984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.060274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.060305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.060326 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.060339 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.162534 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.162580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.162591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.162608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.162619 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.264309 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.264345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.264358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.264372 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.264385 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.366338 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.366403 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.366443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.366479 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.366504 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.468943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.468996 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.469013 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.469035 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.469050 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.572494 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.572552 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.572564 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.572580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.572593 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.675396 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.675444 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.675459 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.675482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.675499 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.778602 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.778654 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.778672 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.778692 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.778705 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.881159 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.881209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.881223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.881239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.881250 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.984213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.984269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.984283 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.984299 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:13 crc kubenswrapper[4949]: I1001 15:43:13.984312 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:13Z","lastTransitionTime":"2025-10-01T15:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.086990 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.087037 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.087048 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.087065 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.087077 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:14Z","lastTransitionTime":"2025-10-01T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.189277 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.189316 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.189329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.189345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.189354 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:14Z","lastTransitionTime":"2025-10-01T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.295883 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.295919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.295928 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.295940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.295949 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:14Z","lastTransitionTime":"2025-10-01T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.397980 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.398030 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.398046 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.398063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.398074 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:14Z","lastTransitionTime":"2025-10-01T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.501217 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.501303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.501338 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.501369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.501391 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:14Z","lastTransitionTime":"2025-10-01T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.600838 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.600874 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.600853 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.600922 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:14 crc kubenswrapper[4949]: E1001 15:43:14.601025 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:14 crc kubenswrapper[4949]: E1001 15:43:14.601232 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:14 crc kubenswrapper[4949]: E1001 15:43:14.601411 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:14 crc kubenswrapper[4949]: E1001 15:43:14.601567 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.603491 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.603529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.603543 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.603561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.603577 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:14Z","lastTransitionTime":"2025-10-01T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.706209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.706516 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.706580 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.706640 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.706703 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:14Z","lastTransitionTime":"2025-10-01T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.809763 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.809810 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.809821 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.809835 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.809844 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:14Z","lastTransitionTime":"2025-10-01T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.912624 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.912675 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.912687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.912716 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:14 crc kubenswrapper[4949]: I1001 15:43:14.912727 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:14Z","lastTransitionTime":"2025-10-01T15:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.015564 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.015611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.015623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.015642 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.015655 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.119600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.119649 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.119662 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.119681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.119696 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.223311 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.223439 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.223461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.223486 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.223537 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.326226 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.326264 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.326298 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.326314 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.326324 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.428992 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.429197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.429223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.429248 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.429264 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.532420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.532715 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.532820 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.532975 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.533091 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.636194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.636237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.636251 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.636268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.636282 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.738343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.738378 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.738388 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.738402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.738413 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.841324 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.841373 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.841384 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.841398 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.841412 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.943623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.943661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.943674 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.943687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:15 crc kubenswrapper[4949]: I1001 15:43:15.943698 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:15Z","lastTransitionTime":"2025-10-01T15:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.046286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.046328 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.046358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.046375 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.046386 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.150293 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.150339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.150348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.150362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.150372 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.252553 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.252599 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.252617 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.252635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.252660 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.355622 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.356266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.356362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.356447 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.356534 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.459725 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.459817 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.459834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.459901 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.459928 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.562850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.563390 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.563452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.563521 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.563601 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.600733 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.600914 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:16 crc kubenswrapper[4949]: E1001 15:43:16.601107 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.601279 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.601390 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:16 crc kubenswrapper[4949]: E1001 15:43:16.601542 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:16 crc kubenswrapper[4949]: E1001 15:43:16.601623 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:16 crc kubenswrapper[4949]: E1001 15:43:16.601715 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.666858 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.666923 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.666943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.666965 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.666980 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.769541 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.769597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.769614 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.769635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.769649 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.872088 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.872153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.872168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.872202 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.872211 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.975090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.975165 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.975178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.975197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:16 crc kubenswrapper[4949]: I1001 15:43:16.975208 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:16Z","lastTransitionTime":"2025-10-01T15:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.078486 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.078522 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.078532 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.078547 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.078558 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:17Z","lastTransitionTime":"2025-10-01T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.181412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.181472 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.181485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.181504 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.181518 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:17Z","lastTransitionTime":"2025-10-01T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.284786 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.284826 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.284835 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.284850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.284858 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:17Z","lastTransitionTime":"2025-10-01T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.388075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.388114 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.388135 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.388150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.388161 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:17Z","lastTransitionTime":"2025-10-01T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.490209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.490249 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.490260 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.490277 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.490288 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:17Z","lastTransitionTime":"2025-10-01T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.594001 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.594069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.594094 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.594158 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.594188 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:17Z","lastTransitionTime":"2025-10-01T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.697175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.697258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.697286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.697317 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.697339 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:17Z","lastTransitionTime":"2025-10-01T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.800074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.800118 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.800160 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.800175 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.800183 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:17Z","lastTransitionTime":"2025-10-01T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.903064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.903174 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.903193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.903219 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:17 crc kubenswrapper[4949]: I1001 15:43:17.903238 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:17Z","lastTransitionTime":"2025-10-01T15:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.006490 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.006537 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.006549 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.006567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.006579 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.109295 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.109374 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.109386 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.109407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.109418 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.214208 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.214274 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.214288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.214311 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.214324 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.318089 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.318186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.318201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.318224 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.318241 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.422431 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.422495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.422520 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.422550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.422575 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.511545 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.511588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.511600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.511616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.511627 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.526062 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.531201 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.531235 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.531245 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.531259 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.531271 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.545212 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.550594 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.550647 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.550663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.550686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.550705 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.568093 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.573268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.573339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.573360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.573450 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.573473 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.593616 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.601105 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.601202 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.601186 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.601118 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.601683 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.601912 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.602718 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.602837 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.610051 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.610118 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.610203 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.610237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.610258 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.633741 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:18Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:18 crc kubenswrapper[4949]: E1001 15:43:18.633980 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.635938 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.636222 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.636339 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.636464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.636590 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.739616 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.739687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.739707 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.739732 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.739747 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.842720 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.842787 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.842803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.842834 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.842851 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.945188 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.945284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.945313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.945345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:18 crc kubenswrapper[4949]: I1001 15:43:18.945370 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:18Z","lastTransitionTime":"2025-10-01T15:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.047515 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.047563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.047573 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.047591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.047601 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.150563 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.150645 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.150656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.150673 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.150683 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.253708 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.253752 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.253763 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.253778 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.253790 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.356464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.356500 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.356531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.356571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.356580 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.459442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.459497 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.459510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.459526 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.459538 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.562681 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.562738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.562753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.562774 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.562791 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.665412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.665477 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.665498 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.665527 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.665552 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.767978 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.768057 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.768070 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.768085 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.768095 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.870279 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.870353 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.870371 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.870394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.870415 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.973281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.973318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.973329 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.973345 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:19 crc kubenswrapper[4949]: I1001 15:43:19.973355 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:19Z","lastTransitionTime":"2025-10-01T15:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.077193 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.077253 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.077266 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.077287 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.077307 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:20Z","lastTransitionTime":"2025-10-01T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.179859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.179928 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.179941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.179960 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.179972 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:20Z","lastTransitionTime":"2025-10-01T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.282855 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.282896 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.282904 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.282918 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.282928 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:20Z","lastTransitionTime":"2025-10-01T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.385476 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.385517 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.385526 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.385540 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.385550 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:20Z","lastTransitionTime":"2025-10-01T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.488220 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.488255 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.488267 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.488283 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.488292 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:20Z","lastTransitionTime":"2025-10-01T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.590303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.590347 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.590358 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.590375 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.590386 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:20Z","lastTransitionTime":"2025-10-01T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.600906 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.600933 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.600906 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:20 crc kubenswrapper[4949]: E1001 15:43:20.601022 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.601059 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:20 crc kubenswrapper[4949]: E1001 15:43:20.601072 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:20 crc kubenswrapper[4949]: E1001 15:43:20.601228 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:20 crc kubenswrapper[4949]: E1001 15:43:20.601297 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.694012 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.694113 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.694153 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.694172 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.694182 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:20Z","lastTransitionTime":"2025-10-01T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.797313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.797460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.797508 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.797541 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.797567 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:20Z","lastTransitionTime":"2025-10-01T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.900307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.900397 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.900416 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.900441 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:20 crc kubenswrapper[4949]: I1001 15:43:20.900468 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:20Z","lastTransitionTime":"2025-10-01T15:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.002050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.002101 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.002112 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.002143 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.002153 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.105620 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.105679 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.105703 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.105731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.105755 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.208117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.208230 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.208242 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.208256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.208265 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.311543 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.311607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.311622 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.311648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.311670 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.413882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.413927 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.413938 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.413955 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.413965 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.517000 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.517069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.517094 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.517169 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.517200 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.620315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.620369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.620381 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.620398 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.620409 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.627455 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.640853 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57b836d2-a87c-4356-b8cb-71eed69e089e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e4ba077e53324e0a6cb25d94228b1a6265878a923ef88934ee0d87967385c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef3
18bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.651475 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\
\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.664166 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.683522 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.701344 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.718919 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.723406 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.723474 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.723495 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.723522 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.723540 4949 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.732817 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-
rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.748166 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:53Z\\\",\\\"message\\\":\\\"2025-10-01T15:42:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38\\\\n2025-10-01T15:42:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38 to /host/opt/cni/bin/\\\\n2025-10-01T15:42:08Z [verbose] multus-daemon started\\\\n2025-10-01T15:42:08Z [verbose] Readiness Indicator file check\\\\n2025-10-01T15:42:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.764425 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-xr96p" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6656de7a-9b8f-4714-81ae-3685c01f11fb\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:16Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://24da01cfcdff6460cccc5cf06d24aa758d33ee7d086c7b93078b5848c650fcc9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:16Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7dcd0a912a7490cf31f2438c2d9ee393640b73a759351f580436ca20ee37def7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bb083b944e63cb16e4e6950795783a029e22951c15578cdbc918ddde18e6609f\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:08Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ef38b9fcaddddc4c3c6d41242fd4d59d90c9ed706702e4ce1c7a91dff892e204\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:09Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://7987f
a3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7987fa3169fae8eee96db354788f7cb9b56831ee009b4497e47b10508bc0078c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2990fa1b9f4e2aa038a7e2a591b166864a45f2c1028d9977ce4eb2fad341bc90\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:13Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0296aee184dcef13f03cc1da4987bef80c13c394f067300167eadd8a4091f4c0\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:15Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-kdnzf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-xr96p\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.786769 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5897a864-bb97-4443-a301-b30b22d13a88\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:45Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://02305e2d3e998f2b11185e65d51381d27f69cc544cba61b0878457e4816e5371\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests
\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://05aabfc445359dba1707d194e9dcf27bb57037b582f1181556cc65e2f2f712f4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e34912853a2f81af97497fc66053b02383114c5a42d46e39f2e7a98b48bf9e78\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b31382514a5f6e8d2
34356e87d2c1aba55e02a4edb54835382fe810f10e8df83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:46Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2c11c5a8bd9444a04b072ad656edec0e3f7d287392a2dffd509f3d7c0d1d4601\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:45Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e3
3e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://82f5829a6fde4ed73e823b0a45110468bcb861678edb311d20a9e4293157f43c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3bd9c086167f8ff8ae453c16bda240cb4af2910b0744a103c3a5cccbc770c9e7\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\
":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2cff5b2f8dc5490616f84e4782e54b5a8fd7843d9065cf49e5dc847e5b55129e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.800320 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d76d2a7d-139b-40b6-91f8-c2c8ae0ecc71\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9b08c9c1691470813c489fb61abd198ca2b5a94ddec2f9b5a06b9d6c9b56334\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e467246736f9943a0524d6786a8c4bd4a567f60f20ffd7259a5124c844c44522\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://da953f01526669a6db92b3ed2d63682b200ea37d86e44ff34a3b7630877fe6ce\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://dd7235ba9aab9db39a23fe6b99b3278ff4c1ede47a1d58c2c986fe3244a2fa12\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.812005 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"8101fee3-df2e-48d3-87d7-fd9cb4da0f2d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3ae2d981e775beb7e08339ee2b24437a4e7faa11462128d69dc0120905075759\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://714ddbb1add5e32f98fea89501df6bfa217b607e3252008af82f130bc66e69c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d21bf7c2c1ae0ae4427bdccaa3cb14eec8034ad6b89498a13e41c346aa10f33d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://dd26ba21f3b352fff240b18114aa47c6185f125ed89be2586e676b766434c3b6\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.826075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.826269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.826286 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.826303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.826315 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.827000 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daa37955372c2b350c3cd8103db2d231407c8f853e02d43fcae591d66a1f2bdc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.838358 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:03Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c5a5390e0a0285f487962def201f71183a2cfdbcc8db56f1bd0c64b2f6014c86\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\
\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.851736 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3a28cdb5789930fa13d9e76aeb4dc582efc2f97a117e7d3170909dddd8f6eed2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a081b64cca92126512c81346c1b0005ddefcd11b702b12598c567cc627d3561f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc 
kubenswrapper[4949]: I1001 15:43:21.861372 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-kg2qk" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75c430b-f863-470c-b57f-def53bf840db\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ab6ca47d17c063e35d4f3109e8dbe39f42dbf245b8f439b12464e167fb57dc4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tvfpb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\
"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-kg2qk\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.878650 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b30af5f-469f-4bee-b77f-4b58edba325b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:43:06Z\\\",\\\"message\\\":\\\"ontroller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: 
Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:06Z is after 2025-08-24T17:21:41Z]\\\\nI1001 15:43:06.742091 6988 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-machine-config-operator/machine-config-controller_TCP_cluster\\\\\\\", UUID:\\\\\\\"3f1b9878-e751-4e46-a226-ce007d2c4aa7\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-machine-config-operator/machine-config-controller\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLBGroup\\\\\\\"}}}, built \\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:43:05Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:11Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://aa5db58ae52e340d95
77e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:42:06Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f7t8d\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-pppfm\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.890230 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"660f5a12-b71d-454a-8ec0-bae2646530a0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:18Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://956d6c7f1d35dedc8f0f1255301e89801525d97a0a3837f3d085900a53bead59\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:18Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3a4e431fdeada96dc88cfe8f1b26c2ba3502f
52e3d28880a87c5c4ed7482defb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:19Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cbc4b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:18Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-zxj4z\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:21Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.928537 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.928626 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.928636 4949 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.928648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:21 crc kubenswrapper[4949]: I1001 15:43:21.928657 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:21Z","lastTransitionTime":"2025-10-01T15:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.031093 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.031184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.031197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.031215 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.031227 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.133571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.133625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.133667 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.133688 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.133701 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.237453 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.237534 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.237546 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.237568 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.237580 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.340281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.340344 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.340362 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.340382 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.340399 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.444454 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.444533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.444556 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.444590 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.444630 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.547843 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.547896 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.547910 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.547926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.547938 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.601021 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.601100 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:22 crc kubenswrapper[4949]: E1001 15:43:22.601209 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.601503 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.601522 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:22 crc kubenswrapper[4949]: E1001 15:43:22.601846 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:22 crc kubenswrapper[4949]: E1001 15:43:22.601927 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:22 crc kubenswrapper[4949]: E1001 15:43:22.601975 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.602169 4949 scope.go:117] "RemoveContainer" containerID="1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a" Oct 01 15:43:22 crc kubenswrapper[4949]: E1001 15:43:22.602339 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.650411 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.650458 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.650466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.650482 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.650490 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.753421 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.753499 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.753526 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.753558 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.753583 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.856412 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.856484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.856511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.856545 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.856570 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.959993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.960057 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.960075 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.960100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:22 crc kubenswrapper[4949]: I1001 15:43:22.960117 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:22Z","lastTransitionTime":"2025-10-01T15:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.063764 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.063868 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.063900 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.063929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.063949 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.065538 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:23 crc kubenswrapper[4949]: E1001 15:43:23.065741 4949 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:43:23 crc kubenswrapper[4949]: E1001 15:43:23.065864 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs podName:d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd nodeName:}" failed. No retries permitted until 2025-10-01 15:44:27.06582975 +0000 UTC m=+166.371435991 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs") pod "network-metrics-daemon-kfx8b" (UID: "d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd") : object "openshift-multus"/"metrics-daemon-secret" not registered Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.167299 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.167404 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.167432 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.167462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.167484 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.270374 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.270428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.270443 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.270464 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.270478 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.373569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.373648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.373676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.373703 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.373719 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.476865 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.476942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.476967 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.476995 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.477019 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.580463 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.580540 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.580561 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.580592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.580609 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.685077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.685187 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.685206 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.685231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.685249 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.788378 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.788465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.788491 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.788525 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.788555 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.891349 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.891407 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.891424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.891446 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.891464 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.994342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.994418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.994437 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.994470 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:23 crc kubenswrapper[4949]: I1001 15:43:23.994489 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:23Z","lastTransitionTime":"2025-10-01T15:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.097746 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.097806 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.097823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.097846 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.097864 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:24Z","lastTransitionTime":"2025-10-01T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.200776 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.200815 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.200823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.200837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.200845 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:24Z","lastTransitionTime":"2025-10-01T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.303492 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.303583 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.303607 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.303635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.303656 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:24Z","lastTransitionTime":"2025-10-01T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.406951 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.407025 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.407049 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.407079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.407175 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:24Z","lastTransitionTime":"2025-10-01T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.509845 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.509873 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.509880 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.509893 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.509901 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:24Z","lastTransitionTime":"2025-10-01T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.601156 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.601481 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:24 crc kubenswrapper[4949]: E1001 15:43:24.601836 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.601521 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:24 crc kubenswrapper[4949]: E1001 15:43:24.601898 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.601486 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:24 crc kubenswrapper[4949]: E1001 15:43:24.601973 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:24 crc kubenswrapper[4949]: E1001 15:43:24.601778 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.613360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.613410 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.613430 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.613462 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.613484 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:24Z","lastTransitionTime":"2025-10-01T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.717288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.717357 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.717376 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.717405 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.717433 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:24Z","lastTransitionTime":"2025-10-01T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.820255 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.820311 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.820324 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.820341 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.820354 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:24Z","lastTransitionTime":"2025-10-01T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.923445 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.923498 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.923510 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.923524 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:24 crc kubenswrapper[4949]: I1001 15:43:24.923533 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:24Z","lastTransitionTime":"2025-10-01T15:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.027451 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.027511 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.027527 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.027551 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.027567 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.130847 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.130914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.130949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.130979 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.131044 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.234190 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.234288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.234318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.234352 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.234376 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.337735 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.337784 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.337803 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.337823 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.337837 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.439925 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.439964 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.439975 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.439991 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.440002 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.542771 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.542837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.542860 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.542888 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.542910 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.645039 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.645117 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.645189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.645221 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.645244 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.748871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.748939 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.748961 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.748984 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.749001 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.851719 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.851763 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.851775 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.851790 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.851802 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.954430 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.954481 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.954489 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.954502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:25 crc kubenswrapper[4949]: I1001 15:43:25.954511 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:25Z","lastTransitionTime":"2025-10-01T15:43:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.057942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.058027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.058052 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.058081 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.058105 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.161883 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.161929 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.161941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.161957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.161970 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.264655 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.264694 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.264703 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.264718 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.264727 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.366993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.367179 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.367243 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.367267 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.367283 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.470236 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.470273 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.470284 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.470300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.470311 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.573004 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.573469 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.573684 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.573729 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.573754 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.601034 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.601360 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.601417 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.601389 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:26 crc kubenswrapper[4949]: E1001 15:43:26.601591 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:26 crc kubenswrapper[4949]: E1001 15:43:26.601861 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:26 crc kubenswrapper[4949]: E1001 15:43:26.601964 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:26 crc kubenswrapper[4949]: E1001 15:43:26.602038 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.676751 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.676819 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.676842 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.676871 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.676895 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.780194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.780265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.780281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.780304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.780322 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.884197 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.884267 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.884304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.884334 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.884355 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.987174 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.987239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.987272 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.987300 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:26 crc kubenswrapper[4949]: I1001 15:43:26.987321 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:26Z","lastTransitionTime":"2025-10-01T15:43:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.091780 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.091908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.091942 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.091970 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.091991 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:27Z","lastTransitionTime":"2025-10-01T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.195363 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.195428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.195445 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.195469 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.195488 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:27Z","lastTransitionTime":"2025-10-01T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.299098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.299214 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.299244 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.299268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.299285 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:27Z","lastTransitionTime":"2025-10-01T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.408178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.408585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.408604 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.408631 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.408649 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:27Z","lastTransitionTime":"2025-10-01T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.511909 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.512105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.512152 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.512180 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.512197 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:27Z","lastTransitionTime":"2025-10-01T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.614945 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.615060 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.615077 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.615105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.615157 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:27Z","lastTransitionTime":"2025-10-01T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.718212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.718288 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.718311 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.718340 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.718362 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:27Z","lastTransitionTime":"2025-10-01T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.821540 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.821608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.821645 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.821677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.821710 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:27Z","lastTransitionTime":"2025-10-01T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.924069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.924110 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.924118 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.924151 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:27 crc kubenswrapper[4949]: I1001 15:43:27.924162 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:27Z","lastTransitionTime":"2025-10-01T15:43:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.026533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.026754 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.026766 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.026782 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.026795 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.128761 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.128818 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.128839 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.128865 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.128886 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.231402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.231483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.231506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.231537 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.231562 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.335241 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.335278 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.335290 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.335306 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.335318 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.439324 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.439402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.439552 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.439597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.439627 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.543418 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.543489 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.543502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.543530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.543545 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.600890 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.600905 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.601091 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.601249 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.601043 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.601098 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.601391 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.601458 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.646625 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.646686 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.646708 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.646740 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.646763 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.657098 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.657189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.657213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.657239 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.657261 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.676672 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:28Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.681527 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.681593 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.681610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.681634 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.681652 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.714682 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:28Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.730935 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.731005 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.731015 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.731037 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.731049 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.751545 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:28Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.756530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.756588 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.756597 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.756610 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.756620 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.777212 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:28Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.781238 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.781281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.781298 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.781317 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.781331 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.799548 4949 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"lastTransitionTime\\\":\\\"2025-10-01T15:43:28Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"a4890954-ee04-4573-a52a-d0437f2c0f47\\\",\\\"systemUUID\\\":\\\"de71db26-32c1-4956-9e9f-66fc023dcd38\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:28Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:28 crc kubenswrapper[4949]: E1001 15:43:28.799653 4949 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.800830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.800862 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.800872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.800886 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.800895 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.903622 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.903659 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.903669 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.903684 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:28 crc kubenswrapper[4949]: I1001 15:43:28.903694 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:28Z","lastTransitionTime":"2025-10-01T15:43:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.006623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.006687 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.006710 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.006738 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.006755 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.110333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.110400 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.110417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.110441 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.110458 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.212837 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.212875 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.212885 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.212902 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.212913 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.316062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.316164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.316187 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.316216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.316239 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.418533 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.418601 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.418623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.418655 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.418678 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.520844 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.520899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.520914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.520940 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.520958 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.623895 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.623976 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.624062 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.624100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.624162 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.726840 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.726876 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.726885 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.726897 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.726906 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.829788 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.829882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.829919 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.829953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.829979 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.931835 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.931870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.931881 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.931899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:29 crc kubenswrapper[4949]: I1001 15:43:29.931916 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:29Z","lastTransitionTime":"2025-10-01T15:43:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.034949 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.034990 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.035007 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.035027 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.035041 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.137470 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.137526 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.137540 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.137559 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.137572 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.239403 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.239475 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.239484 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.239496 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.239504 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.342731 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.342799 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.342809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.342830 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.342840 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.445577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.445623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.445639 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.445658 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.445670 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.549042 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.549112 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.549182 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.549211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.549233 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.601104 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.601271 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.601440 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.601572 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:30 crc kubenswrapper[4949]: E1001 15:43:30.601563 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:30 crc kubenswrapper[4949]: E1001 15:43:30.601752 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:30 crc kubenswrapper[4949]: E1001 15:43:30.601821 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:30 crc kubenswrapper[4949]: E1001 15:43:30.602358 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.652178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.652212 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.652223 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.652240 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.652300 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.754349 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.754455 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.754487 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.754519 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.754543 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.857502 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.857562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.857571 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.857585 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.857594 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.960734 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.960781 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.960791 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.960811 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:30 crc kubenswrapper[4949]: I1001 15:43:30.960824 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:30Z","lastTransitionTime":"2025-10-01T15:43:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.063068 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.063156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.063168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.063189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.063201 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.166342 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.166400 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.166428 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.166458 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.166480 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.269485 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.269529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.269538 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.269550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.269559 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.372852 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.372914 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.372931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.372957 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.372975 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.476192 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.476246 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.476257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.476281 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.476290 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.580079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.580156 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.580168 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.580186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.580198 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.624704 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"439d2f19-631a-4560-aa58-70cdeef997a8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://110e9a59347a665dfa5d6fcdf9e4937525c5e799c8356b2f28d3ba523f925d93\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://513e0be5244aa9da6671888fedd775306c0824650cc687f0faea952c28d1b6b3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c47ab556b4721804041f9abee5effa082f4f29fbffe7681a7fc8efe0cb4f2b7e\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://41f38cee75135e15635fdda772af2bdd1588e57025d28247223af8b5c34202e3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://02eba524109342a0990f10dfdf5432c2504acbb394b4f6e23b370becc256f51c\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2025-10-01T15:41:55Z\\\",\\\"message\\\":\\\"W1001 15:41:44.880099 1 cmd.go:257] Using insecure, self-signed certificates\\\\nI1001 15:41:44.880530 1 crypto.go:601] Generating new CA for check-endpoints-signer@1759333304 cert, and key in /tmp/serving-cert-2775419030/serving-signer.crt, /tmp/serving-cert-2775419030/serving-signer.key\\\\nI1001 15:41:45.097803 1 observer_polling.go:159] Starting file observer\\\\nW1001 15:41:45.100841 1 builder.go:272] unable to get owner reference (falling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": dial tcp [::1]:6443: connect: connection refused\\\\nI1001 15:41:45.103153 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI1001 15:41:45.104329 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2775419030/tls.crt::/tmp/serving-cert-2775419030/tls.key\\\\\\\"\\\\nF1001 15:41:55.441587 1 cmd.go:182] error initializing delegating authentication: unable to load configmap based request-header-client-ca-file: Get \\\\\\\"https://localhost:6443/api/v1/namespaces/kube-system/configmaps/extension-apiserver-authentication\\\\\\\": net/http: TLS handshake 
timeout\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f2ceea2906c651debca4b72a5654ba5f8291a20d513a96b2fde7f1a0373674c6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:44Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://91389c091782322ad0051f6a367de88fe64443d668213c60c1ce6734f2cf844c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"s
tartedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.640050 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"57b836d2-a87c-4356-b8cb-71eed69e089e\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:43Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:41:41Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e3e4ba077e53324e0a6cb25d94228b1a6265878a923ef88934ee0d87967385c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef3
18bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:41:43Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4cbb0d610e0cf4a4beb09c6beae9121ac6b8f448a5d215da1d15debfaf9a4770\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2025-10-01T15:41:42Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2025-10-01T15:41:42Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:41:41Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: 
failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.655182 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-pgg4f" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"25c0759d-c94a-438c-b478-48161acbb035\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:04Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d9037ea44408cee3d00ee27bb38ea8e49344ab06d6b173d86689a131010eb7f9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\
\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-r42ms\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:04Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-pgg4f\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.673515 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.682530 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.682567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.682577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 
15:43:31.682591 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.682600 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.690047 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.705301 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:00Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.717692 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0e15cd67-d4ad-49b8-96a6-da114105e558\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8ee03163adf8037e21674f8ff81a9eba22d580bf29b7c9a81d9505dd1e768169\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a85a240a198e037219428db0de4c633c5e99064f
7f13a6a8922eba7fa3fac633\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8xghl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-l6287\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.729705 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-s5r4m" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:07Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2025-10-01T15:42:53Z\\\",\\\"message\\\":\\\"2025-10-01T15:42:08+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38\\\\n2025-10-01T15:42:08+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_c8f86e39-7a0c-46dd-b100-b876fbdfae38 to /host/opt/cni/bin/\\\\n2025-10-01T15:42:08Z [verbose] multus-daemon started\\\\n2025-10-01T15:42:08Z [verbose] 
Readiness Indicator file check\\\\n2025-10-01T15:42:53Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2025-10-01T15:42:06Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2025-10-01T15:42:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-ghb86\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:05Z\\\"}}\" for pod \"openshift-multus\"/\"multus-s5r4m\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.743374 4949 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2025-10-01T15:42:19Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-f2bpg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2025-10-01T15:42:19Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-kfx8b\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2025-10-01T15:43:31Z is after 2025-08-24T17:21:41Z" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.785916 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.785975 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.785993 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.786017 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.786034 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.798974 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=89.798914986 podStartE2EDuration="1m29.798914986s" podCreationTimestamp="2025-10-01 15:42:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:31.795231662 +0000 UTC m=+111.100837863" watchObservedRunningTime="2025-10-01 15:43:31.798914986 +0000 UTC m=+111.104521207" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.841301 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=90.841281608 podStartE2EDuration="1m30.841281608s" podCreationTimestamp="2025-10-01 15:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:31.821721288 +0000 UTC m=+111.127327489" watchObservedRunningTime="2025-10-01 15:43:31.841281608 +0000 UTC m=+111.146887789" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.862425 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=58.862409872 podStartE2EDuration="58.862409872s" podCreationTimestamp="2025-10-01 15:42:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:31.841613847 +0000 UTC m=+111.147220038" watchObservedRunningTime="2025-10-01 15:43:31.862409872 +0000 UTC m=+111.168016063" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.888018 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 
15:43:31.888054 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.888064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.888079 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.888097 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.894679 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-xr96p" podStartSLOduration=87.89465784 podStartE2EDuration="1m27.89465784s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:31.894522166 +0000 UTC m=+111.200128357" watchObservedRunningTime="2025-10-01 15:43:31.89465784 +0000 UTC m=+111.200264041" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.923816 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-kg2qk" podStartSLOduration=87.923794059 podStartE2EDuration="1m27.923794059s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 
15:43:31.923781529 +0000 UTC m=+111.229387740" watchObservedRunningTime="2025-10-01 15:43:31.923794059 +0000 UTC m=+111.229400250" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.966235 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-zxj4z" podStartSLOduration=86.966217103 podStartE2EDuration="1m26.966217103s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:31.962655683 +0000 UTC m=+111.268261894" watchObservedRunningTime="2025-10-01 15:43:31.966217103 +0000 UTC m=+111.271823294" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.990663 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.990715 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.990728 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.990745 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:31 crc kubenswrapper[4949]: I1001 15:43:31.990757 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:31Z","lastTransitionTime":"2025-10-01T15:43:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.093269 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.093333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.093348 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.093369 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.093384 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:32Z","lastTransitionTime":"2025-10-01T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.195265 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.195294 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.195303 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.195315 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.195323 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:32Z","lastTransitionTime":"2025-10-01T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.297676 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.297716 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.297730 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.297749 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.297763 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:32Z","lastTransitionTime":"2025-10-01T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.400576 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.400608 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.400619 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.400635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.400646 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:32Z","lastTransitionTime":"2025-10-01T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.504009 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.504066 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.504084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.504108 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.504167 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:32Z","lastTransitionTime":"2025-10-01T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.601312 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:32 crc kubenswrapper[4949]: E1001 15:43:32.601438 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.601340 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.601471 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.601516 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:32 crc kubenswrapper[4949]: E1001 15:43:32.601609 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:32 crc kubenswrapper[4949]: E1001 15:43:32.601672 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:32 crc kubenswrapper[4949]: E1001 15:43:32.601744 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.606965 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.607037 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.607064 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.607100 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.607171 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:32Z","lastTransitionTime":"2025-10-01T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.711034 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.711092 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.711115 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.711164 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.711184 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:32Z","lastTransitionTime":"2025-10-01T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.813931 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.814003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.814026 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.814056 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.814093 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:32Z","lastTransitionTime":"2025-10-01T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.917483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.917566 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.917592 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.917635 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:32 crc kubenswrapper[4949]: I1001 15:43:32.917670 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:32Z","lastTransitionTime":"2025-10-01T15:43:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.021099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.021161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.021174 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.021189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.021199 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.123999 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.124056 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.124074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.124161 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.124181 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.227862 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.227947 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.227969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.227997 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.228020 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.331216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.331257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.331268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.331290 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.331304 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.434804 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.434899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.434917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.434941 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.434957 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.537882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.537946 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.537958 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.537981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.538002 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.640809 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.640859 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.640870 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.640888 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.640898 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.743627 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.743667 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.743678 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.743695 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.743707 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.847090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.847360 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.847452 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.847520 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.847574 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.950099 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.950205 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.950229 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.950257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:33 crc kubenswrapper[4949]: I1001 15:43:33.950279 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:33Z","lastTransitionTime":"2025-10-01T15:43:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.053660 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.053721 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.053737 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.053760 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.053776 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.156465 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.156535 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.156552 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.156576 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.156593 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.258772 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.258833 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.258849 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.258872 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.258889 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.361773 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.361824 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.361903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.361922 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.361957 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.465082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.465194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.465216 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.465243 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.465261 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.567812 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.567884 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.567906 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.567937 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.567962 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.601051 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.601103 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.601226 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.601064 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:34 crc kubenswrapper[4949]: E1001 15:43:34.601278 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:34 crc kubenswrapper[4949]: E1001 15:43:34.601405 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:34 crc kubenswrapper[4949]: E1001 15:43:34.601561 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:34 crc kubenswrapper[4949]: E1001 15:43:34.601710 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.674255 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.674322 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.674438 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.674483 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.674508 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.777548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.777623 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.777641 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.777666 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.777684 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.881084 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.881171 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.881189 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.881213 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.881229 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.984074 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.984237 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.984273 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.984304 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:34 crc kubenswrapper[4949]: I1001 15:43:34.984328 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:34Z","lastTransitionTime":"2025-10-01T15:43:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.087231 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.087279 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.087290 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.087305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.087317 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:35Z","lastTransitionTime":"2025-10-01T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.190311 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.190394 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.190417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.190441 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.190457 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:35Z","lastTransitionTime":"2025-10-01T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.293042 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.293073 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.293082 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.293097 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.293106 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:35Z","lastTransitionTime":"2025-10-01T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.395150 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.395186 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.395194 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.395209 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.395219 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:35Z","lastTransitionTime":"2025-10-01T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.497807 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.497865 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.497885 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.497908 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.497925 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:35Z","lastTransitionTime":"2025-10-01T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.601968 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.602029 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.602050 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.602076 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.602097 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:35Z","lastTransitionTime":"2025-10-01T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.705258 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.705324 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.705343 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.705367 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.705387 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:35Z","lastTransitionTime":"2025-10-01T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.808656 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.808696 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.808708 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.808726 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.808738 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:35Z","lastTransitionTime":"2025-10-01T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.911305 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.911346 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.911356 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.911379 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:35 crc kubenswrapper[4949]: I1001 15:43:35.911390 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:35Z","lastTransitionTime":"2025-10-01T15:43:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.013357 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.013390 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.013401 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.013417 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.013428 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.115899 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.115969 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.115994 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.116023 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.116047 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.219461 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.219517 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.219529 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.219548 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.219566 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.324068 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.324184 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.324215 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.324246 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.324269 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.427723 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.427836 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.427863 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.427892 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.427914 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.530793 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.530856 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.530873 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.530903 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.530925 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.600746 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.600897 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:36 crc kubenswrapper[4949]: E1001 15:43:36.601083 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.601177 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.601225 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:36 crc kubenswrapper[4949]: E1001 15:43:36.601347 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:36 crc kubenswrapper[4949]: E1001 15:43:36.602044 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:36 crc kubenswrapper[4949]: E1001 15:43:36.602185 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.603001 4949 scope.go:117] "RemoveContainer" containerID="1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a" Oct 01 15:43:36 crc kubenswrapper[4949]: E1001 15:43:36.603349 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-pppfm_openshift-ovn-kubernetes(6b30af5f-469f-4bee-b77f-4b58edba325b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.634504 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.634567 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.634587 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.634611 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.634629 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.737424 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.737486 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.737506 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.737531 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.737549 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.841030 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.841163 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.841185 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.841211 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.841230 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.944002 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.944055 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.944069 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.944090 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:36 crc kubenswrapper[4949]: I1001 15:43:36.944104 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:36Z","lastTransitionTime":"2025-10-01T15:43:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.046805 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.046853 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.046864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.046882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.046895 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.150178 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.150219 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.150236 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.150257 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.150271 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.253392 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.253466 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.253489 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.253517 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.253539 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.356313 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.356351 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.356361 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.356382 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.356398 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.459797 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.459850 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.459864 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.459884 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.459903 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.563003 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.563063 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.563080 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.563105 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.563124 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.605005 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:37 crc kubenswrapper[4949]: E1001 15:43:37.605243 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.666877 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.666917 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.666926 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.666943 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.666952 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.770460 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.770545 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.770569 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.770600 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.770622 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.873911 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.873953 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.873964 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.873981 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.873991 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.977327 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.977402 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.977420 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.977442 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:37 crc kubenswrapper[4949]: I1001 15:43:37.977455 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:37Z","lastTransitionTime":"2025-10-01T15:43:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.080916 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.080974 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.080988 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.081010 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.081024 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:38Z","lastTransitionTime":"2025-10-01T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.183682 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.183727 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.183736 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.183753 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.183764 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:38Z","lastTransitionTime":"2025-10-01T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.286256 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.286307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.286316 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.286333 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.286344 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:38Z","lastTransitionTime":"2025-10-01T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.388842 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.388882 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.388890 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.388905 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.388915 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:38Z","lastTransitionTime":"2025-10-01T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.494648 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.494689 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.494700 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.494715 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.494727 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:38Z","lastTransitionTime":"2025-10-01T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.599268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.599309 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.599320 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.599336 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.599347 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:38Z","lastTransitionTime":"2025-10-01T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.600756 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.600833 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:38 crc kubenswrapper[4949]: E1001 15:43:38.600905 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.600757 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:38 crc kubenswrapper[4949]: E1001 15:43:38.601004 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:38 crc kubenswrapper[4949]: E1001 15:43:38.601306 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.702268 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.702307 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.702318 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.702335 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.702346 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:38Z","lastTransitionTime":"2025-10-01T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.804509 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.804550 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.804562 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.804577 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.804588 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:38Z","lastTransitionTime":"2025-10-01T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.858609 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.858645 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.858661 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.858677 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.858687 4949 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2025-10-01T15:43:38Z","lastTransitionTime":"2025-10-01T15:43:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.906530 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd"] Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.907077 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.909942 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.910421 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.910421 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.910617 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.935168 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=98.935116231 podStartE2EDuration="1m38.935116231s" podCreationTimestamp="2025-10-01 15:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:38.933859696 +0000 UTC m=+118.239465887" watchObservedRunningTime="2025-10-01 15:43:38.935116231 +0000 UTC m=+118.240722422" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.948524 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.948570 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.948732 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.948758 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.948818 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.966055 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=34.96600251 podStartE2EDuration="34.96600251s" 
podCreationTimestamp="2025-10-01 15:43:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:38.950602857 +0000 UTC m=+118.256209048" watchObservedRunningTime="2025-10-01 15:43:38.96600251 +0000 UTC m=+118.271608701" Oct 01 15:43:38 crc kubenswrapper[4949]: I1001 15:43:38.966347 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-pgg4f" podStartSLOduration=94.966343399 podStartE2EDuration="1m34.966343399s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:38.964886569 +0000 UTC m=+118.270492770" watchObservedRunningTime="2025-10-01 15:43:38.966343399 +0000 UTC m=+118.271949580" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.027017 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podStartSLOduration=95.026994266 podStartE2EDuration="1m35.026994266s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:39.025647648 +0000 UTC m=+118.331253839" watchObservedRunningTime="2025-10-01 15:43:39.026994266 +0000 UTC m=+118.332600487" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.047695 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-s5r4m" podStartSLOduration=95.047675748 podStartE2EDuration="1m35.047675748s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:39.038934512 +0000 UTC m=+118.344540733" 
watchObservedRunningTime="2025-10-01 15:43:39.047675748 +0000 UTC m=+118.353281939" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.049557 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.049593 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.049643 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.049666 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.049669 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: 
\"kubernetes.io/host-path/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.049692 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.049748 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.051368 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-service-ca\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.057755 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: 
I1001 15:43:39.073041 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/aeb37817-e8e8-4ebc-989c-aa6cf03593ed-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-9rrgd\" (UID: \"aeb37817-e8e8-4ebc-989c-aa6cf03593ed\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.232032 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" Oct 01 15:43:39 crc kubenswrapper[4949]: I1001 15:43:39.601487 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:39 crc kubenswrapper[4949]: E1001 15:43:39.601693 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.235485 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/1.log" Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.235935 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/0.log" Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.235977 4949 generic.go:334] "Generic (PLEG): container finished" podID="ffe32683-6bbe-472a-811e-8fe0fd1d1bb6" containerID="b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc" exitCode=1 Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.236037 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s5r4m" event={"ID":"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6","Type":"ContainerDied","Data":"b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc"} Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.236072 4949 scope.go:117] "RemoveContainer" containerID="68a54b1244d03fd99a21c9ca116873fe7a80ec3053c95c8b99de3fb62e581419" Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.237364 4949 scope.go:117] "RemoveContainer" containerID="b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc" Oct 01 15:43:40 crc kubenswrapper[4949]: E1001 15:43:40.237917 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-s5r4m_openshift-multus(ffe32683-6bbe-472a-811e-8fe0fd1d1bb6)\"" pod="openshift-multus/multus-s5r4m" podUID="ffe32683-6bbe-472a-811e-8fe0fd1d1bb6" Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.238392 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" event={"ID":"aeb37817-e8e8-4ebc-989c-aa6cf03593ed","Type":"ContainerStarted","Data":"ba6c7ac8d268f11315b25bf4ca994db6d62d32912f83663153d6175e808f76a8"} Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.238459 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" event={"ID":"aeb37817-e8e8-4ebc-989c-aa6cf03593ed","Type":"ContainerStarted","Data":"676437d8e6cc2c0ec0058ee69a6839151f4a753873fbe5f04d527aa012dd811f"} Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.287306 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-9rrgd" podStartSLOduration=96.287287336 podStartE2EDuration="1m36.287287336s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:40.286146443 +0000 UTC m=+119.591752634" watchObservedRunningTime="2025-10-01 15:43:40.287287336 +0000 UTC m=+119.592893527" Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.601163 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.601241 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:40 crc kubenswrapper[4949]: E1001 15:43:40.601288 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:40 crc kubenswrapper[4949]: I1001 15:43:40.601375 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:40 crc kubenswrapper[4949]: E1001 15:43:40.601430 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:40 crc kubenswrapper[4949]: E1001 15:43:40.601553 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:41 crc kubenswrapper[4949]: I1001 15:43:41.242476 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/1.log" Oct 01 15:43:41 crc kubenswrapper[4949]: E1001 15:43:41.583431 4949 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Oct 01 15:43:41 crc kubenswrapper[4949]: I1001 15:43:41.601574 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:41 crc kubenswrapper[4949]: E1001 15:43:41.602735 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:41 crc kubenswrapper[4949]: E1001 15:43:41.687233 4949 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 01 15:43:42 crc kubenswrapper[4949]: I1001 15:43:42.600993 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:42 crc kubenswrapper[4949]: I1001 15:43:42.601028 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:42 crc kubenswrapper[4949]: E1001 15:43:42.601110 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:42 crc kubenswrapper[4949]: I1001 15:43:42.601181 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:42 crc kubenswrapper[4949]: E1001 15:43:42.601323 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:42 crc kubenswrapper[4949]: E1001 15:43:42.601365 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:43 crc kubenswrapper[4949]: I1001 15:43:43.600765 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:43 crc kubenswrapper[4949]: E1001 15:43:43.601070 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:44 crc kubenswrapper[4949]: I1001 15:43:44.600702 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:44 crc kubenswrapper[4949]: I1001 15:43:44.600802 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:44 crc kubenswrapper[4949]: I1001 15:43:44.600881 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:44 crc kubenswrapper[4949]: E1001 15:43:44.600928 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:44 crc kubenswrapper[4949]: E1001 15:43:44.600975 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:44 crc kubenswrapper[4949]: E1001 15:43:44.601198 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:45 crc kubenswrapper[4949]: I1001 15:43:45.600703 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:45 crc kubenswrapper[4949]: E1001 15:43:45.600852 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:46 crc kubenswrapper[4949]: I1001 15:43:46.600713 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:46 crc kubenswrapper[4949]: I1001 15:43:46.600816 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:46 crc kubenswrapper[4949]: E1001 15:43:46.600879 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:46 crc kubenswrapper[4949]: I1001 15:43:46.600754 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:46 crc kubenswrapper[4949]: E1001 15:43:46.600995 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:46 crc kubenswrapper[4949]: E1001 15:43:46.601025 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:46 crc kubenswrapper[4949]: E1001 15:43:46.688427 4949 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 01 15:43:47 crc kubenswrapper[4949]: I1001 15:43:47.601153 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:47 crc kubenswrapper[4949]: E1001 15:43:47.601336 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:48 crc kubenswrapper[4949]: I1001 15:43:48.600699 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:48 crc kubenswrapper[4949]: I1001 15:43:48.600800 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:48 crc kubenswrapper[4949]: E1001 15:43:48.600867 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:48 crc kubenswrapper[4949]: I1001 15:43:48.600931 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:48 crc kubenswrapper[4949]: E1001 15:43:48.601009 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:48 crc kubenswrapper[4949]: E1001 15:43:48.601078 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:49 crc kubenswrapper[4949]: I1001 15:43:49.601355 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:49 crc kubenswrapper[4949]: E1001 15:43:49.601557 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:50 crc kubenswrapper[4949]: I1001 15:43:50.601545 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:50 crc kubenswrapper[4949]: I1001 15:43:50.601644 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:50 crc kubenswrapper[4949]: I1001 15:43:50.601642 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:50 crc kubenswrapper[4949]: E1001 15:43:50.601780 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:50 crc kubenswrapper[4949]: E1001 15:43:50.602656 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:50 crc kubenswrapper[4949]: I1001 15:43:50.606459 4949 scope.go:117] "RemoveContainer" containerID="1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a" Oct 01 15:43:50 crc kubenswrapper[4949]: E1001 15:43:50.607461 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:51 crc kubenswrapper[4949]: I1001 15:43:51.280050 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/3.log" Oct 01 15:43:51 crc kubenswrapper[4949]: I1001 15:43:51.284165 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerStarted","Data":"38cbc33011ec57f66d2abf2dbe2f8c91a9857563b440dc7517bacd0666294750"} Oct 01 15:43:51 crc kubenswrapper[4949]: I1001 15:43:51.284791 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:43:51 crc kubenswrapper[4949]: I1001 15:43:51.319613 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podStartSLOduration=107.319586183 podStartE2EDuration="1m47.319586183s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:43:51.318454491 +0000 UTC m=+130.624060682" watchObservedRunningTime="2025-10-01 15:43:51.319586183 +0000 UTC m=+130.625192374" Oct 01 15:43:51 crc kubenswrapper[4949]: I1001 15:43:51.583668 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kfx8b"] Oct 01 15:43:51 crc kubenswrapper[4949]: I1001 15:43:51.583808 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:51 crc kubenswrapper[4949]: E1001 15:43:51.583927 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:51 crc kubenswrapper[4949]: I1001 15:43:51.601501 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:51 crc kubenswrapper[4949]: E1001 15:43:51.603289 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:51 crc kubenswrapper[4949]: E1001 15:43:51.693550 4949 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Oct 01 15:43:52 crc kubenswrapper[4949]: I1001 15:43:52.601656 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:52 crc kubenswrapper[4949]: I1001 15:43:52.601678 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:52 crc kubenswrapper[4949]: E1001 15:43:52.602063 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:52 crc kubenswrapper[4949]: E1001 15:43:52.601855 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:53 crc kubenswrapper[4949]: I1001 15:43:53.601519 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:53 crc kubenswrapper[4949]: E1001 15:43:53.601699 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:53 crc kubenswrapper[4949]: I1001 15:43:53.602173 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:53 crc kubenswrapper[4949]: E1001 15:43:53.602338 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:53 crc kubenswrapper[4949]: I1001 15:43:53.602355 4949 scope.go:117] "RemoveContainer" containerID="b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc" Oct 01 15:43:54 crc kubenswrapper[4949]: I1001 15:43:54.300499 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/1.log" Oct 01 15:43:54 crc kubenswrapper[4949]: I1001 15:43:54.300963 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s5r4m" event={"ID":"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6","Type":"ContainerStarted","Data":"674ea8da82695405b8163ca176857f496a30efb1ab1f6e9e6485ce661af8216d"} Oct 01 15:43:54 crc kubenswrapper[4949]: I1001 15:43:54.601118 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:54 crc kubenswrapper[4949]: I1001 15:43:54.601330 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:54 crc kubenswrapper[4949]: E1001 15:43:54.601491 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:54 crc kubenswrapper[4949]: E1001 15:43:54.601683 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:55 crc kubenswrapper[4949]: I1001 15:43:55.601986 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:55 crc kubenswrapper[4949]: I1001 15:43:55.601987 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:55 crc kubenswrapper[4949]: E1001 15:43:55.602212 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-kfx8b" podUID="d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd" Oct 01 15:43:55 crc kubenswrapper[4949]: E1001 15:43:55.602376 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Oct 01 15:43:56 crc kubenswrapper[4949]: I1001 15:43:56.601069 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:56 crc kubenswrapper[4949]: I1001 15:43:56.601230 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:56 crc kubenswrapper[4949]: E1001 15:43:56.601283 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Oct 01 15:43:56 crc kubenswrapper[4949]: E1001 15:43:56.601464 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Oct 01 15:43:57 crc kubenswrapper[4949]: I1001 15:43:57.600653 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:43:57 crc kubenswrapper[4949]: I1001 15:43:57.600736 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:43:57 crc kubenswrapper[4949]: I1001 15:43:57.604870 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Oct 01 15:43:57 crc kubenswrapper[4949]: I1001 15:43:57.605091 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Oct 01 15:43:57 crc kubenswrapper[4949]: I1001 15:43:57.605111 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Oct 01 15:43:57 crc kubenswrapper[4949]: I1001 15:43:57.605216 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Oct 01 15:43:58 crc kubenswrapper[4949]: I1001 15:43:58.601191 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:43:58 crc kubenswrapper[4949]: I1001 15:43:58.601388 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:43:58 crc kubenswrapper[4949]: I1001 15:43:58.604059 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Oct 01 15:43:58 crc kubenswrapper[4949]: I1001 15:43:58.604172 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.565815 4949 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.622378 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.622892 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.623197 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm5dr"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.623942 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.624522 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.624923 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.627269 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nrnsk"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.627613 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.628025 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.629456 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.630785 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wjx4c"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.631427 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.631549 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.631676 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.632435 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.634274 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjkx4"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.634829 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.637373 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.655735 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.657808 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.658077 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.670664 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671109 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671344 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Oct 01 15:43:59 crc 
kubenswrapper[4949]: I1001 15:43:59.671349 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671410 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671405 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671505 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671525 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671528 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671573 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671635 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671638 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671684 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671638 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" 
Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671722 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671790 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.671943 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.672042 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.672288 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-l2tfx"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.672311 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.672456 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.672501 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.672805 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673114 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673164 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673211 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673279 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673294 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673376 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673450 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673487 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673567 4949 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673674 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673803 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.673880 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.674483 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mlktd"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.674606 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.674938 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.674960 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.675089 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.675111 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.675220 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.675335 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.675528 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.675719 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.675886 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.676119 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.676927 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 
15:43:59.677076 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.677210 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.677313 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.677418 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.677439 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.677516 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.677578 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.677897 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.678318 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.678890 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.679222 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.681171 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t8vc2"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.681735 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47lbj"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.682426 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.682747 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.683328 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.684260 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.686182 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-p2vmt"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.686972 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-p2vmt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.689175 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.689956 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.690713 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.716998 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3103144-2f18-4cc3-82ad-3fedf7e23914-serving-cert\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717059 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2h4x9\" (UniqueName: \"kubernetes.io/projected/0df389db-e47d-4b16-9221-f1e5311c5cd6-kube-api-access-2h4x9\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717092 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91e40664-3369-4ee6-816c-03b6272c3d15-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kwm7g\" (UID: \"91e40664-3369-4ee6-816c-03b6272c3d15\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:43:59 crc 
kubenswrapper[4949]: I1001 15:43:59.717159 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b3103144-2f18-4cc3-82ad-3fedf7e23914-encryption-config\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717192 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717222 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717249 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-policies\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717275 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-audit\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717302 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b3103144-2f18-4cc3-82ad-3fedf7e23914-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717349 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717380 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fllm6\" (UniqueName: \"kubernetes.io/projected/4ddc725e-4c58-4cd2-a228-e53fede0a61e-kube-api-access-fllm6\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717465 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-encryption-config\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717494 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrgs8\" (UniqueName: \"kubernetes.io/projected/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-kube-api-access-qrgs8\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717523 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-client-ca\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717553 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0df389db-e47d-4b16-9221-f1e5311c5cd6-serving-cert\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717578 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4ddc725e-4c58-4cd2-a228-e53fede0a61e-config\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717607 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717638 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-etcd-client\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717666 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b3103144-2f18-4cc3-82ad-3fedf7e23914-etcd-client\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717690 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-config\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717714 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x4mxh\" (UniqueName: \"kubernetes.io/projected/91e40664-3369-4ee6-816c-03b6272c3d15-kube-api-access-x4mxh\") pod \"openshift-config-operator-7777fb866f-kwm7g\" (UID: \"91e40664-3369-4ee6-816c-03b6272c3d15\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717742 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3103144-2f18-4cc3-82ad-3fedf7e23914-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717770 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717779 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717808 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-client-ca\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717838 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-dir\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717876 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-node-pullsecrets\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717905 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717948 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ddc725e-4c58-4cd2-a228-e53fede0a61e-trusted-ca\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.717979 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-provider-selection\") pod 
\"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718006 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jchgl\" (UniqueName: \"kubernetes.io/projected/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-kube-api-access-jchgl\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718036 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rqdfd\" (UniqueName: \"kubernetes.io/projected/6d0cfaab-30c5-4483-8f77-5929e026cb31-kube-api-access-rqdfd\") pod \"cluster-samples-operator-665b6dd947-gd6wp\" (UID: \"6d0cfaab-30c5-4483-8f77-5929e026cb31\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718064 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-config\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718087 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91e40664-3369-4ee6-816c-03b6272c3d15-serving-cert\") pod \"openshift-config-operator-7777fb866f-kwm7g\" (UID: \"91e40664-3369-4ee6-816c-03b6272c3d15\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718111 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3103144-2f18-4cc3-82ad-3fedf7e23914-audit-policies\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718155 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b5fmd\" (UniqueName: \"kubernetes.io/projected/b3103144-2f18-4cc3-82ad-3fedf7e23914-kube-api-access-b5fmd\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718188 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718215 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-serving-cert\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718240 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-serving-cert\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: 
\"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718266 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddc725e-4c58-4cd2-a228-e53fede0a61e-serving-cert\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718294 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718315 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3103144-2f18-4cc3-82ad-3fedf7e23914-audit-dir\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718362 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718395 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718426 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-image-import-ca\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718455 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d0cfaab-30c5-4483-8f77-5929e026cb31-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gd6wp\" (UID: \"6d0cfaab-30c5-4483-8f77-5929e026cb31\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718479 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718505 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-config\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718545 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btq4q\" (UniqueName: \"kubernetes.io/projected/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-kube-api-access-btq4q\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718574 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-audit-dir\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.718597 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.719562 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.720537 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.721020 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-xlwdp"] Oct 01 15:43:59 crc 
kubenswrapper[4949]: I1001 15:43:59.721142 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.722703 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.732288 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.734620 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.735101 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.736611 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.736879 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.737376 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.737600 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.737684 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.737745 4949 reflector.go:368] Caches populated 
for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.737842 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.737850 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.737950 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.737962 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.737974 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.738151 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.738320 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.738496 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.738514 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.738634 4949 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.738842 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.738978 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.739345 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.740987 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.741518 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.741744 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.746871 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.747104 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.747207 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.747358 4949 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-machine-api"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.747441 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.747471 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.747659 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.747719 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.747868 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.748027 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.748086 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.748172 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.748269 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.748305 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.748344 4949 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.748551 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.748591 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.752443 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jpqsm"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.752707 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.753486 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.753784 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.754089 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.755285 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.756410 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.756426 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.756557 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.756911 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.764792 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.765367 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.765640 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6tt8n"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.766051 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.766407 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.767243 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.767529 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.767605 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.768232 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.768684 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.769135 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.769485 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.770012 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-58ggj"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.770785 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.775889 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.776803 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.777314 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.777947 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.778314 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.778622 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.785005 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.788838 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gclj9"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.789488 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.790819 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7t2lr"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.791423 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.802352 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.803724 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.805435 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.809205 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.809789 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.810042 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.810333 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.813635 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819008 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c12e71-4fbb-407d-b4b2-81ae599a853a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dppnh\" (UID: \"44c12e71-4fbb-407d-b4b2-81ae599a853a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819048 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x4mxh\" (UniqueName: \"kubernetes.io/projected/91e40664-3369-4ee6-816c-03b6272c3d15-kube-api-access-x4mxh\") pod \"openshift-config-operator-7777fb866f-kwm7g\" (UID: \"91e40664-3369-4ee6-816c-03b6272c3d15\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819073 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3103144-2f18-4cc3-82ad-3fedf7e23914-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819095 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: 
\"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819118 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tn9zr\" (UniqueName: \"kubernetes.io/projected/dbf87b2b-7468-46c2-bf9b-cd95faaedbb9-kube-api-access-tn9zr\") pod \"multus-admission-controller-857f4d67dd-6tt8n\" (UID: \"dbf87b2b-7468-46c2-bf9b-cd95faaedbb9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819186 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-client-ca\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819219 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-dir\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819267 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-node-pullsecrets\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819294 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: 
\"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819342 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4dec871b-0b00-4f01-b97c-aaf139f5f879-srv-cert\") pod \"catalog-operator-68c6474976-8znr5\" (UID: \"4dec871b-0b00-4f01-b97c-aaf139f5f879\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819383 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4ddc725e-4c58-4cd2-a228-e53fede0a61e-trusted-ca\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819408 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819419 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-dir\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 
15:43:59.819496 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-node-pullsecrets\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819666 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b3103144-2f18-4cc3-82ad-3fedf7e23914-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.819433 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jchgl\" (UniqueName: \"kubernetes.io/projected/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-kube-api-access-jchgl\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821259 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rqdfd\" (UniqueName: \"kubernetes.io/projected/6d0cfaab-30c5-4483-8f77-5929e026cb31-kube-api-access-rqdfd\") pod \"cluster-samples-operator-665b6dd947-gd6wp\" (UID: \"6d0cfaab-30c5-4483-8f77-5929e026cb31\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821321 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-config\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" 
Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821354 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91e40664-3369-4ee6-816c-03b6272c3d15-serving-cert\") pod \"openshift-config-operator-7777fb866f-kwm7g\" (UID: \"91e40664-3369-4ee6-816c-03b6272c3d15\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3103144-2f18-4cc3-82ad-3fedf7e23914-audit-policies\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821409 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b5fmd\" (UniqueName: \"kubernetes.io/projected/b3103144-2f18-4cc3-82ad-3fedf7e23914-kube-api-access-b5fmd\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821441 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821472 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-serving-cert\") pod \"apiserver-76f77b778f-hjkx4\" (UID: 
\"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821497 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-serving-cert\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821538 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddc725e-4c58-4cd2-a228-e53fede0a61e-serving-cert\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.821563 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.822235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.822465 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/4ddc725e-4c58-4cd2-a228-e53fede0a61e-trusted-ca\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.822940 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-config\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.824564 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-client-ca\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.827392 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/91e40664-3369-4ee6-816c-03b6272c3d15-serving-cert\") pod \"openshift-config-operator-7777fb866f-kwm7g\" (UID: \"91e40664-3369-4ee6-816c-03b6272c3d15\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.828162 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.828185 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.828384 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-serving-cert\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.828663 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.828890 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/b3103144-2f18-4cc3-82ad-3fedf7e23914-audit-policies\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.828993 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6zrfz\" (UniqueName: \"kubernetes.io/projected/4dec871b-0b00-4f01-b97c-aaf139f5f879-kube-api-access-6zrfz\") pod \"catalog-operator-68c6474976-8znr5\" (UID: \"4dec871b-0b00-4f01-b97c-aaf139f5f879\") " 
pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829070 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3103144-2f18-4cc3-82ad-3fedf7e23914-audit-dir\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829114 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829167 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829211 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-image-import-ca\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829244 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d0cfaab-30c5-4483-8f77-5929e026cb31-samples-operator-tls\") pod 
\"cluster-samples-operator-665b6dd947-gd6wp\" (UID: \"6d0cfaab-30c5-4483-8f77-5929e026cb31\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829377 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829414 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-config\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829450 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btq4q\" (UniqueName: \"kubernetes.io/projected/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-kube-api-access-btq4q\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829489 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-audit-dir\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829526 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829567 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3103144-2f18-4cc3-82ad-3fedf7e23914-serving-cert\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829594 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2h4x9\" (UniqueName: \"kubernetes.io/projected/0df389db-e47d-4b16-9221-f1e5311c5cd6-kube-api-access-2h4x9\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829682 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrtjl\" (UniqueName: \"kubernetes.io/projected/44c12e71-4fbb-407d-b4b2-81ae599a853a-kube-api-access-zrtjl\") pod \"openshift-controller-manager-operator-756b6f6bc6-dppnh\" (UID: \"44c12e71-4fbb-407d-b4b2-81ae599a853a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829850 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91e40664-3369-4ee6-816c-03b6272c3d15-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kwm7g\" (UID: 
\"91e40664-3369-4ee6-816c-03b6272c3d15\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829884 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b3103144-2f18-4cc3-82ad-3fedf7e23914-encryption-config\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.829981 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830013 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-policies\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830074 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-audit\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830142 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b3103144-2f18-4cc3-82ad-3fedf7e23914-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830176 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830231 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fllm6\" (UniqueName: \"kubernetes.io/projected/4ddc725e-4c58-4cd2-a228-e53fede0a61e-kube-api-access-fllm6\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830265 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830295 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-serving-cert\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830320 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4dec871b-0b00-4f01-b97c-aaf139f5f879-profile-collector-cert\") pod \"catalog-operator-68c6474976-8znr5\" (UID: \"4dec871b-0b00-4f01-b97c-aaf139f5f879\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830358 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dbf87b2b-7468-46c2-bf9b-cd95faaedbb9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6tt8n\" (UID: \"dbf87b2b-7468-46c2-bf9b-cd95faaedbb9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830440 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-encryption-config\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830531 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrgs8\" (UniqueName: \"kubernetes.io/projected/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-kube-api-access-qrgs8\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " 
pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830601 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830611 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-client-ca\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830639 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0df389db-e47d-4b16-9221-f1e5311c5cd6-serving-cert\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830703 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddc725e-4c58-4cd2-a228-e53fede0a61e-config\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830738 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 
15:43:59.830796 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c12e71-4fbb-407d-b4b2-81ae599a853a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dppnh\" (UID: \"44c12e71-4fbb-407d-b4b2-81ae599a853a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830826 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-etcd-client\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.830860 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b3103144-2f18-4cc3-82ad-3fedf7e23914-etcd-client\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.831133 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-config\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.831712 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.831958 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.832054 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/b3103144-2f18-4cc3-82ad-3fedf7e23914-audit-dir\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.833225 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-config\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.833864 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-trusted-ca-bundle\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.835397 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-encryption-config\") pod \"apiserver-76f77b778f-hjkx4\" (UID: 
\"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.836193 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-image-import-ca\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.836304 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-client-ca\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.836576 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.837218 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-policies\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.837830 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-audit\") pod \"apiserver-76f77b778f-hjkx4\" (UID: 
\"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.838673 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/b3103144-2f18-4cc3-82ad-3fedf7e23914-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.838673 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.838727 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6d0cfaab-30c5-4483-8f77-5929e026cb31-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-gd6wp\" (UID: \"6d0cfaab-30c5-4483-8f77-5929e026cb31\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.839035 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/91e40664-3369-4ee6-816c-03b6272c3d15-available-featuregates\") pod \"openshift-config-operator-7777fb866f-kwm7g\" (UID: \"91e40664-3369-4ee6-816c-03b6272c3d15\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.839966 4949 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-nbrh4"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.840264 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-etcd-serving-ca\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.840292 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4ddc725e-4c58-4cd2-a228-e53fede0a61e-config\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.840402 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-audit-dir\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.840485 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4ddc725e-4c58-4cd2-a228-e53fede0a61e-serving-cert\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.841157 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.841619 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-config\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.841851 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.842162 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.842282 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.842333 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nrnsk"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.842528 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.842542 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0df389db-e47d-4b16-9221-f1e5311c5cd6-serving-cert\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.843525 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/b3103144-2f18-4cc3-82ad-3fedf7e23914-etcd-client\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.843550 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b3103144-2f18-4cc3-82ad-3fedf7e23914-serving-cert\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.844355 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/b3103144-2f18-4cc3-82ad-3fedf7e23914-encryption-config\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.844412 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-etcd-client\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.844832 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-controller-manager/controller-manager-879f6c89f-sm5dr"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.845142 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.845173 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.848555 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.850245 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-8s6sd"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.851966 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-8s6sd" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.852098 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mlktd"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.853772 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.855957 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.857965 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wjx4c"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.859750 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.860822 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-l2tfx"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.862801 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjkx4"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.864508 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.865703 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.866597 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 
15:43:59.868422 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.869765 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.871738 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.872878 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gclj9"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.873869 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.874944 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jpqsm"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.876063 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7t2lr"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.877340 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xlwdp"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.878514 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6tt8n"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.879654 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.880727 4949 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.881761 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t8vc2"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.882806 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-p2vmt"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.883854 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.884829 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.885005 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47lbj"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.887165 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.888314 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8s6sd"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.889530 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.890674 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbrh4"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.891677 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.892717 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.893765 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.895002 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-kgxcz"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.895697 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kgxcz" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.896161 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rsrns"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.897098 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.897245 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.898266 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kgxcz"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.899374 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rsrns"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.900438 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-qb6b6"] Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.901355 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.905603 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.925153 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.931912 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6zrfz\" (UniqueName: \"kubernetes.io/projected/4dec871b-0b00-4f01-b97c-aaf139f5f879-kube-api-access-6zrfz\") pod \"catalog-operator-68c6474976-8znr5\" (UID: \"4dec871b-0b00-4f01-b97c-aaf139f5f879\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.931982 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrtjl\" (UniqueName: \"kubernetes.io/projected/44c12e71-4fbb-407d-b4b2-81ae599a853a-kube-api-access-zrtjl\") pod \"openshift-controller-manager-operator-756b6f6bc6-dppnh\" (UID: 
\"44c12e71-4fbb-407d-b4b2-81ae599a853a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.932027 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4dec871b-0b00-4f01-b97c-aaf139f5f879-profile-collector-cert\") pod \"catalog-operator-68c6474976-8znr5\" (UID: \"4dec871b-0b00-4f01-b97c-aaf139f5f879\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.932046 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dbf87b2b-7468-46c2-bf9b-cd95faaedbb9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6tt8n\" (UID: \"dbf87b2b-7468-46c2-bf9b-cd95faaedbb9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.932081 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c12e71-4fbb-407d-b4b2-81ae599a853a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dppnh\" (UID: \"44c12e71-4fbb-407d-b4b2-81ae599a853a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.932140 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tn9zr\" (UniqueName: \"kubernetes.io/projected/dbf87b2b-7468-46c2-bf9b-cd95faaedbb9-kube-api-access-tn9zr\") pod \"multus-admission-controller-857f4d67dd-6tt8n\" (UID: \"dbf87b2b-7468-46c2-bf9b-cd95faaedbb9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.932157 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c12e71-4fbb-407d-b4b2-81ae599a853a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dppnh\" (UID: \"44c12e71-4fbb-407d-b4b2-81ae599a853a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.932213 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4dec871b-0b00-4f01-b97c-aaf139f5f879-srv-cert\") pod \"catalog-operator-68c6474976-8znr5\" (UID: \"4dec871b-0b00-4f01-b97c-aaf139f5f879\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.933098 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44c12e71-4fbb-407d-b4b2-81ae599a853a-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-dppnh\" (UID: \"44c12e71-4fbb-407d-b4b2-81ae599a853a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.935461 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/44c12e71-4fbb-407d-b4b2-81ae599a853a-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-dppnh\" (UID: \"44c12e71-4fbb-407d-b4b2-81ae599a853a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.945004 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Oct 01 15:43:59 crc kubenswrapper[4949]: I1001 15:43:59.965112 4949 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-console"/"service-ca" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.006045 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.026191 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.046407 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.065043 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.086432 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.105670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.125435 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.157359 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.165643 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.186351 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Oct 01 
15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.206156 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.225648 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.246467 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.265020 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.286158 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.305649 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.335082 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.345969 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.366574 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.387100 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.405936 
4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.426525 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.446881 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.466067 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.477612 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/dbf87b2b-7468-46c2-bf9b-cd95faaedbb9-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-6tt8n\" (UID: \"dbf87b2b-7468-46c2-bf9b-cd95faaedbb9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.487095 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.505335 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.517823 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/4dec871b-0b00-4f01-b97c-aaf139f5f879-srv-cert\") pod \"catalog-operator-68c6474976-8znr5\" (UID: \"4dec871b-0b00-4f01-b97c-aaf139f5f879\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.525626 4949 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.545097 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.558468 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/4dec871b-0b00-4f01-b97c-aaf139f5f879-profile-collector-cert\") pod \"catalog-operator-68c6474976-8znr5\" (UID: \"4dec871b-0b00-4f01-b97c-aaf139f5f879\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.565730 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.586471 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.605918 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.625908 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.646706 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.665937 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.687012 4949 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.707159 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.726554 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.745399 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.765846 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.783357 4949 request.go:700] Waited for 1.012341672s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress/secrets?fieldSelector=metadata.name%3Drouter-stats-default&limit=500&resourceVersion=0 Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.784556 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.804781 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.826260 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.846305 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.884974 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.907785 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.925570 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.946472 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.965384 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Oct 01 15:44:00 crc kubenswrapper[4949]: I1001 15:44:00.985952 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.005730 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.025992 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.045197 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.065592 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.086598 4949 reflector.go:368] Caches 
populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.105192 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.126010 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.145400 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.166590 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.187526 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.206286 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.225201 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.245930 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.265801 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.286544 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 
15:44:01.305842 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.342949 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jchgl\" (UniqueName: \"kubernetes.io/projected/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-kube-api-access-jchgl\") pod \"oauth-openshift-558db77b4-wjx4c\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.362387 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rqdfd\" (UniqueName: \"kubernetes.io/projected/6d0cfaab-30c5-4483-8f77-5929e026cb31-kube-api-access-rqdfd\") pod \"cluster-samples-operator-665b6dd947-gd6wp\" (UID: \"6d0cfaab-30c5-4483-8f77-5929e026cb31\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.381709 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x4mxh\" (UniqueName: \"kubernetes.io/projected/91e40664-3369-4ee6-816c-03b6272c3d15-kube-api-access-x4mxh\") pod \"openshift-config-operator-7777fb866f-kwm7g\" (UID: \"91e40664-3369-4ee6-816c-03b6272c3d15\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.406076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b5fmd\" (UniqueName: \"kubernetes.io/projected/b3103144-2f18-4cc3-82ad-3fedf7e23914-kube-api-access-b5fmd\") pod \"apiserver-7bbb656c7d-np8v7\" (UID: \"b3103144-2f18-4cc3-82ad-3fedf7e23914\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.425752 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qrgs8\" (UniqueName: \"kubernetes.io/projected/54bc2b08-ed4b-45fc-baa6-e681a412b2ed-kube-api-access-qrgs8\") pod \"apiserver-76f77b778f-hjkx4\" (UID: \"54bc2b08-ed4b-45fc-baa6-e681a412b2ed\") " pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.442626 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fllm6\" (UniqueName: \"kubernetes.io/projected/4ddc725e-4c58-4cd2-a228-e53fede0a61e-kube-api-access-fllm6\") pod \"console-operator-58897d9998-nrnsk\" (UID: \"4ddc725e-4c58-4cd2-a228-e53fede0a61e\") " pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.463944 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2h4x9\" (UniqueName: \"kubernetes.io/projected/0df389db-e47d-4b16-9221-f1e5311c5cd6-kube-api-access-2h4x9\") pod \"controller-manager-879f6c89f-sm5dr\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.473006 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.479017 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.483611 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btq4q\" (UniqueName: \"kubernetes.io/projected/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-kube-api-access-btq4q\") pod \"route-controller-manager-6576b87f9c-4v6mr\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.485547 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.490973 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.522593 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.527149 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.532332 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.545902 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.556191 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.565879 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.570492 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.580638 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.585406 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.587809 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.607726 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.625964 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.646871 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.666534 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.686359 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.706095 
4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.724747 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.746031 4949 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.765643 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.783827 4949 request.go:700] Waited for 1.882227347s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/secrets?fieldSelector=metadata.name%3Dmachine-config-server-dockercfg-qx5rd&limit=500&resourceVersion=0 Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.785489 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.810029 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.825756 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.836557 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g"] Oct 01 15:44:01 crc kubenswrapper[4949]: W1001 15:44:01.846504 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod91e40664_3369_4ee6_816c_03b6272c3d15.slice/crio-2a0cb2386b4d2363f6c826066a1e5b568435a9cd5d4164a427d6b0c5e116a4ab WatchSource:0}: Error finding container 2a0cb2386b4d2363f6c826066a1e5b568435a9cd5d4164a427d6b0c5e116a4ab: Status 404 returned error can't find the container with id 2a0cb2386b4d2363f6c826066a1e5b568435a9cd5d4164a427d6b0c5e116a4ab Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.862455 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6zrfz\" (UniqueName: \"kubernetes.io/projected/4dec871b-0b00-4f01-b97c-aaf139f5f879-kube-api-access-6zrfz\") pod \"catalog-operator-68c6474976-8znr5\" (UID: \"4dec871b-0b00-4f01-b97c-aaf139f5f879\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.880430 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrtjl\" (UniqueName: \"kubernetes.io/projected/44c12e71-4fbb-407d-b4b2-81ae599a853a-kube-api-access-zrtjl\") pod \"openshift-controller-manager-operator-756b6f6bc6-dppnh\" (UID: \"44c12e71-4fbb-407d-b4b2-81ae599a853a\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.898157 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tn9zr\" (UniqueName: \"kubernetes.io/projected/dbf87b2b-7468-46c2-bf9b-cd95faaedbb9-kube-api-access-tn9zr\") pod \"multus-admission-controller-857f4d67dd-6tt8n\" (UID: \"dbf87b2b-7468-46c2-bf9b-cd95faaedbb9\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.906560 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.935172 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7"] Oct 01 15:44:01 crc kubenswrapper[4949]: W1001 15:44:01.943743 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3103144_2f18_4cc3_82ad_3fedf7e23914.slice/crio-d62e2f545bab5accda00a38e9ad850c3b3ca8ad7bebef8c4a2e64ad82e747dfa WatchSource:0}: Error finding container d62e2f545bab5accda00a38e9ad850c3b3ca8ad7bebef8c4a2e64ad82e747dfa: Status 404 returned error can't find the container with id d62e2f545bab5accda00a38e9ad850c3b3ca8ad7bebef8c4a2e64ad82e747dfa Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.964746 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-serving-cert\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.964926 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1209b691-1ca5-4577-a46f-00b590053ec9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.964958 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/0dede0f5-5799-426f-94dc-cba3a14494fb-metrics-certs\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.964981 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zpd5\" (UniqueName: \"kubernetes.io/projected/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-kube-api-access-9zpd5\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.965596 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0dede0f5-5799-426f-94dc-cba3a14494fb-default-certificate\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.965630 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zz76h\" (UID: \"3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.965674 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-oauth-config\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " 
pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.965698 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t7cmz\" (UID: \"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.965736 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.965758 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19e22ec0-f904-4fe5-a5ff-ba976489c026-auth-proxy-config\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.965808 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433c73f4-6132-487a-b44d-ac351cd0fb7d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lk9lz\" (UID: \"433c73f4-6132-487a-b44d-ac351cd0fb7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966035 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-images\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966057 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0accd572-1725-44eb-9d94-94184e622485-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9gv8b\" (UID: \"0accd572-1725-44eb-9d94-94184e622485\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966079 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e3cc267-5513-4e94-a950-31dace366440-config\") pod \"openshift-apiserver-operator-796bbdcf4f-j5tq6\" (UID: \"7e3cc267-5513-4e94-a950-31dace366440\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966102 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0accd572-1725-44eb-9d94-94184e622485-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9gv8b\" (UID: \"0accd572-1725-44eb-9d94-94184e622485\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966336 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-oauth-serving-cert\") pod 
\"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966384 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-certificates\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966408 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-images\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966430 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-bound-sa-token\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966451 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wld4k\" (UniqueName: \"kubernetes.io/projected/049bc083-906a-459a-9df8-1e1fb5ff8918-kube-api-access-wld4k\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966475 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0dede0f5-5799-426f-94dc-cba3a14494fb-stats-auth\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966527 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-etcd-client\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966551 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/049bc083-906a-459a-9df8-1e1fb5ff8918-config\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966665 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e81ff5c-f656-4f24-bf49-33fbec1f7052-installation-pull-secrets\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966697 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433c73f4-6132-487a-b44d-ac351cd0fb7d-config\") pod \"kube-controller-manager-operator-78b949d7b-lk9lz\" (UID: 
\"433c73f4-6132-487a-b44d-ac351cd0fb7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.966720 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967479 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-etcd-ca\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967535 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5s6mg\" (UniqueName: \"kubernetes.io/projected/7e3cc267-5513-4e94-a950-31dace366440-kube-api-access-5s6mg\") pod \"openshift-apiserver-operator-796bbdcf4f-j5tq6\" (UID: \"7e3cc267-5513-4e94-a950-31dace366440\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967611 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967633 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lj724\" (UniqueName: \"kubernetes.io/projected/62b77904-e0d8-4a98-b6e0-49b2c18821db-kube-api-access-lj724\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967666 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-trusted-ca\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967689 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxdl\" (UniqueName: \"kubernetes.io/projected/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-kube-api-access-mnxdl\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967710 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f22fcb2d-8db9-4200-bedb-b47ab677a3f2-metrics-tls\") pod \"dns-operator-744455d44c-mlktd\" (UID: \"f22fcb2d-8db9-4200-bedb-b47ab677a3f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967730 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/049bc083-906a-459a-9df8-1e1fb5ff8918-serving-cert\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: 
\"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967752 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-config\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967777 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1209b691-1ca5-4577-a46f-00b590053ec9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967814 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqzxg\" (UniqueName: \"kubernetes.io/projected/c04759f6-7a1a-43cb-a705-41d2319646b6-kube-api-access-cqzxg\") pod \"downloads-7954f5f757-p2vmt\" (UID: \"c04759f6-7a1a-43cb-a705-41d2319646b6\") " pod="openshift-console/downloads-7954f5f757-p2vmt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967834 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/433c73f4-6132-487a-b44d-ac351cd0fb7d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lk9lz\" (UID: \"433c73f4-6132-487a-b44d-ac351cd0fb7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967856 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-config\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967881 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2h8w\" (UniqueName: \"kubernetes.io/projected/1209b691-1ca5-4577-a46f-00b590053ec9-kube-api-access-t2h8w\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967901 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19e22ec0-f904-4fe5-a5ff-ba976489c026-machine-approver-tls\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967961 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-trusted-ca-bundle\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.967996 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e-config\") pod \"kube-apiserver-operator-766d6c64bb-t7cmz\" (UID: 
\"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968018 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-config\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968039 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg9pg\" (UniqueName: \"kubernetes.io/projected/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-kube-api-access-pg9pg\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968072 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dede0f5-5799-426f-94dc-cba3a14494fb-service-ca-bundle\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968153 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e22ec0-f904-4fe5-a5ff-ba976489c026-config\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968188 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9njbk\" (UniqueName: \"kubernetes.io/projected/f22fcb2d-8db9-4200-bedb-b47ab677a3f2-kube-api-access-9njbk\") pod \"dns-operator-744455d44c-mlktd\" (UID: \"f22fcb2d-8db9-4200-bedb-b47ab677a3f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968228 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/049bc083-906a-459a-9df8-1e1fb5ff8918-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968249 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-etcd-service-ca\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968285 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e3cc267-5513-4e94-a950-31dace366440-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-j5tq6\" (UID: \"7e3cc267-5513-4e94-a950-31dace366440\") " 
pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968324 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-metrics-tls\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968352 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-trusted-ca\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968379 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs4b6\" (UniqueName: \"kubernetes.io/projected/3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b-kube-api-access-bs4b6\") pod \"control-plane-machine-set-operator-78cbb6b69f-zz76h\" (UID: \"3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968729 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-tls\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968768 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/049bc083-906a-459a-9df8-1e1fb5ff8918-service-ca-bundle\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968800 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q9l64\" (UniqueName: \"kubernetes.io/projected/19e22ec0-f904-4fe5-a5ff-ba976489c026-kube-api-access-q9l64\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968928 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7w7zp\" (UniqueName: \"kubernetes.io/projected/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-kube-api-access-7w7zp\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968962 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7qqk\" (UniqueName: \"kubernetes.io/projected/971a7b5f-3076-4e58-a96c-d5902c05b319-kube-api-access-p7qqk\") pod \"migrator-59844c95c7-htjbx\" (UID: \"971a7b5f-3076-4e58-a96c-d5902c05b319\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.968991 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-service-ca\") pod \"console-f9d7485db-xlwdp\" (UID: 
\"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.969046 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-proxy-tls\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.969067 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-serving-cert\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.969104 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e81ff5c-f656-4f24-bf49-33fbec1f7052-ca-trust-extracted\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.969235 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t7cmz\" (UID: \"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.969267 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-2d6tn\" (UniqueName: \"kubernetes.io/projected/0dede0f5-5799-426f-94dc-cba3a14494fb-kube-api-access-2d6tn\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.969288 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0accd572-1725-44eb-9d94-94184e622485-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9gv8b\" (UID: \"0accd572-1725-44eb-9d94-94184e622485\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.969382 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpcjq\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-kube-api-access-bpcjq\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:01 crc kubenswrapper[4949]: I1001 15:44:01.969421 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1209b691-1ca5-4577-a46f-00b590053ec9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:01 crc kubenswrapper[4949]: E1001 15:44:01.973363 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:02.473343609 +0000 UTC m=+141.778949820 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.010543 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm5dr"] Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.012745 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr"] Oct 01 15:44:02 crc kubenswrapper[4949]: W1001 15:44:02.026791 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0df389db_e47d_4b16_9221_f1e5311c5cd6.slice/crio-df9635a815ea03823da56a9c4158745f5984d0fd963e2fbc26517136c9eff0b1 WatchSource:0}: Error finding container df9635a815ea03823da56a9c4158745f5984d0fd963e2fbc26517136c9eff0b1: Status 404 returned error can't find the container with id df9635a815ea03823da56a9c4158745f5984d0fd963e2fbc26517136c9eff0b1 Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.060330 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.068671 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.070617 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.070806 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:02.570774415 +0000 UTC m=+141.876380606 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.070849 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0accd572-1725-44eb-9d94-94184e622485-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9gv8b\" (UID: \"0accd572-1725-44eb-9d94-94184e622485\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.070932 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: 
\"kubernetes.io/secret/cc4f3054-5b96-488b-b209-fa9433c513ab-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hbkqd\" (UID: \"cc4f3054-5b96-488b-b209-fa9433c513ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.070970 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433c73f4-6132-487a-b44d-ac351cd0fb7d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lk9lz\" (UID: \"433c73f4-6132-487a-b44d-ac351cd0fb7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.070992 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-images\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071016 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e3cc267-5513-4e94-a950-31dace366440-config\") pod \"openshift-apiserver-operator-796bbdcf4f-j5tq6\" (UID: \"7e3cc267-5513-4e94-a950-31dace366440\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0accd572-1725-44eb-9d94-94184e622485-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9gv8b\" (UID: \"0accd572-1725-44eb-9d94-94184e622485\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:02 crc 
kubenswrapper[4949]: I1001 15:44:02.071058 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-oauth-serving-cert\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071078 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9682d1e4-ba48-496f-822e-6b0262676cca-serving-cert\") pod \"service-ca-operator-777779d784-gclj9\" (UID: \"9682d1e4-ba48-496f-822e-6b0262676cca\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071100 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ms8k\" (UniqueName: \"kubernetes.io/projected/5dfe4211-4bb6-47e4-9797-652393f66bc5-kube-api-access-5ms8k\") pod \"marketplace-operator-79b997595-nbrh4\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071144 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-certificates\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071167 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-images\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: 
\"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qhpk6\" (UniqueName: \"kubernetes.io/projected/dc588bff-a558-4793-ba57-c1efaf23f92a-kube-api-access-qhpk6\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071214 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-secret-volume\") pod \"collect-profiles-29322210-dtf9r\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071241 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-bound-sa-token\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071266 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wld4k\" (UniqueName: \"kubernetes.io/projected/049bc083-906a-459a-9df8-1e1fb5ff8918-kube-api-access-wld4k\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071292 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0dede0f5-5799-426f-94dc-cba3a14494fb-stats-auth\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071312 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-etcd-client\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071337 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-mountpoint-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071358 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-85snb\" (UniqueName: \"kubernetes.io/projected/cc4f3054-5b96-488b-b209-fa9433c513ab-kube-api-access-85snb\") pod \"olm-operator-6b444d44fb-hbkqd\" (UID: \"cc4f3054-5b96-488b-b209-fa9433c513ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071394 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/049bc083-906a-459a-9df8-1e1fb5ff8918-config\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 
15:44:02.071421 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gwklq\" (UniqueName: \"kubernetes.io/projected/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-kube-api-access-gwklq\") pod \"collect-profiles-29322210-dtf9r\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071439 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8fbefc-06d3-4792-b11c-86e8488b231e-config-volume\") pod \"dns-default-8s6sd\" (UID: \"bf8fbefc-06d3-4792-b11c-86e8488b231e\") " pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071475 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7755b8a6-1aee-422e-ad5a-b56cd77c0234-certs\") pod \"machine-config-server-qb6b6\" (UID: \"7755b8a6-1aee-422e-ad5a-b56cd77c0234\") " pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071502 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e81ff5c-f656-4f24-bf49-33fbec1f7052-installation-pull-secrets\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071522 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433c73f4-6132-487a-b44d-ac351cd0fb7d-config\") pod \"kube-controller-manager-operator-78b949d7b-lk9lz\" (UID: \"433c73f4-6132-487a-b44d-ac351cd0fb7d\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071545 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071567 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-etcd-ca\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071588 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c049ec19-17c6-4838-873d-686ea408b5dc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tszj5\" (UID: \"c049ec19-17c6-4838-873d-686ea408b5dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071611 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf8fbefc-06d3-4792-b11c-86e8488b231e-metrics-tls\") pod \"dns-default-8s6sd\" (UID: \"bf8fbefc-06d3-4792-b11c-86e8488b231e\") " pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071630 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25hkw\" (UniqueName: 
\"kubernetes.io/projected/7755b8a6-1aee-422e-ad5a-b56cd77c0234-kube-api-access-25hkw\") pod \"machine-config-server-qb6b6\" (UID: \"7755b8a6-1aee-422e-ad5a-b56cd77c0234\") " pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071668 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5s6mg\" (UniqueName: \"kubernetes.io/projected/7e3cc267-5513-4e94-a950-31dace366440-kube-api-access-5s6mg\") pod \"openshift-apiserver-operator-796bbdcf4f-j5tq6\" (UID: \"7e3cc267-5513-4e94-a950-31dace366440\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071689 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc588bff-a558-4793-ba57-c1efaf23f92a-apiservice-cert\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071716 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdw6t\" (UniqueName: \"kubernetes.io/projected/9682d1e4-ba48-496f-822e-6b0262676cca-kube-api-access-pdw6t\") pod \"service-ca-operator-777779d784-gclj9\" (UID: \"9682d1e4-ba48-496f-822e-6b0262676cca\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071757 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071781 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lj724\" (UniqueName: \"kubernetes.io/projected/62b77904-e0d8-4a98-b6e0-49b2c18821db-kube-api-access-lj724\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071805 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8bfda5f1-ce68-4682-bb75-e62f14514d81-signing-cabundle\") pod \"service-ca-9c57cc56f-7t2lr\" (UID: \"8bfda5f1-ce68-4682-bb75-e62f14514d81\") " pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071830 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/049bc083-906a-459a-9df8-1e1fb5ff8918-serving-cert\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071857 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-trusted-ca\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071882 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mnxdl\" (UniqueName: \"kubernetes.io/projected/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-kube-api-access-mnxdl\") pod 
\"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071904 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f22fcb2d-8db9-4200-bedb-b47ab677a3f2-metrics-tls\") pod \"dns-operator-744455d44c-mlktd\" (UID: \"f22fcb2d-8db9-4200-bedb-b47ab677a3f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071930 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-config\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1209b691-1ca5-4577-a46f-00b590053ec9-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.071981 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c049ec19-17c6-4838-873d-686ea408b5dc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tszj5\" (UID: \"c049ec19-17c6-4838-873d-686ea408b5dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072017 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc4f3054-5b96-488b-b209-fa9433c513ab-srv-cert\") pod \"olm-operator-6b444d44fb-hbkqd\" (UID: \"cc4f3054-5b96-488b-b209-fa9433c513ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072049 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqzxg\" (UniqueName: \"kubernetes.io/projected/c04759f6-7a1a-43cb-a705-41d2319646b6-kube-api-access-cqzxg\") pod \"downloads-7954f5f757-p2vmt\" (UID: \"c04759f6-7a1a-43cb-a705-41d2319646b6\") " pod="openshift-console/downloads-7954f5f757-p2vmt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072073 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7755b8a6-1aee-422e-ad5a-b56cd77c0234-node-bootstrap-token\") pod \"machine-config-server-qb6b6\" (UID: \"7755b8a6-1aee-422e-ad5a-b56cd77c0234\") " pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072097 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/433c73f4-6132-487a-b44d-ac351cd0fb7d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lk9lz\" (UID: \"433c73f4-6132-487a-b44d-ac351cd0fb7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072138 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-config\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " 
pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072165 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8bfda5f1-ce68-4682-bb75-e62f14514d81-signing-key\") pod \"service-ca-9c57cc56f-7t2lr\" (UID: \"8bfda5f1-ce68-4682-bb75-e62f14514d81\") " pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072190 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t2h8w\" (UniqueName: \"kubernetes.io/projected/1209b691-1ca5-4577-a46f-00b590053ec9-kube-api-access-t2h8w\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072213 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19e22ec0-f904-4fe5-a5ff-ba976489c026-machine-approver-tls\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072233 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-plugins-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072256 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ltq8c\" (UniqueName: 
\"kubernetes.io/projected/7ef25191-6af6-4921-973b-f2b127a01a6a-kube-api-access-ltq8c\") pod \"machine-config-controller-84d6567774-twgpv\" (UID: \"7ef25191-6af6-4921-973b-f2b127a01a6a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072268 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-oauth-serving-cert\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072285 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-trusted-ca-bundle\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072322 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e-config\") pod \"kube-apiserver-operator-766d6c64bb-t7cmz\" (UID: \"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072350 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-config\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072377 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-pg9pg\" (UniqueName: \"kubernetes.io/projected/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-kube-api-access-pg9pg\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072400 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dede0f5-5799-426f-94dc-cba3a14494fb-service-ca-bundle\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072432 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072458 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e22ec0-f904-4fe5-a5ff-ba976489c026-config\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072482 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9njbk\" (UniqueName: \"kubernetes.io/projected/f22fcb2d-8db9-4200-bedb-b47ab677a3f2-kube-api-access-9njbk\") pod \"dns-operator-744455d44c-mlktd\" (UID: \"f22fcb2d-8db9-4200-bedb-b47ab677a3f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" Oct 01 15:44:02 crc 
kubenswrapper[4949]: I1001 15:44:02.072508 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/049bc083-906a-459a-9df8-1e1fb5ff8918-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072531 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-etcd-service-ca\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072559 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hfc44\" (UniqueName: \"kubernetes.io/projected/edc35680-cc63-485d-aa02-4e08f96d86fa-kube-api-access-hfc44\") pod \"ingress-canary-kgxcz\" (UID: \"edc35680-cc63-485d-aa02-4e08f96d86fa\") " pod="openshift-ingress-canary/ingress-canary-kgxcz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072583 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e3cc267-5513-4e94-a950-31dace366440-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-j5tq6\" (UID: \"7e3cc267-5513-4e94-a950-31dace366440\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072607 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-metrics-tls\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: 
\"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-trusted-ca\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072649 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-csi-data-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072675 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs4b6\" (UniqueName: \"kubernetes.io/projected/3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b-kube-api-access-bs4b6\") pod \"control-plane-machine-set-operator-78cbb6b69f-zz76h\" (UID: \"3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072699 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc588bff-a558-4793-ba57-c1efaf23f92a-tmpfs\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072724 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-tls\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072748 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/049bc083-906a-459a-9df8-1e1fb5ff8918-service-ca-bundle\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072773 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q9l64\" (UniqueName: \"kubernetes.io/projected/19e22ec0-f904-4fe5-a5ff-ba976489c026-kube-api-access-q9l64\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072798 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edc35680-cc63-485d-aa02-4e08f96d86fa-cert\") pod \"ingress-canary-kgxcz\" (UID: \"edc35680-cc63-485d-aa02-4e08f96d86fa\") " pod="openshift-ingress-canary/ingress-canary-kgxcz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072825 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nbrh4\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:02 crc kubenswrapper[4949]: 
I1001 15:44:02.072887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-config-volume\") pod \"collect-profiles-29322210-dtf9r\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072914 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7w7zp\" (UniqueName: \"kubernetes.io/projected/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-kube-api-access-7w7zp\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072936 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p7qqk\" (UniqueName: \"kubernetes.io/projected/971a7b5f-3076-4e58-a96c-d5902c05b319-kube-api-access-p7qqk\") pod \"migrator-59844c95c7-htjbx\" (UID: \"971a7b5f-3076-4e58-a96c-d5902c05b319\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072958 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-service-ca\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072960 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh"] Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.072992 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-mmtg8\" (UniqueName: \"kubernetes.io/projected/c049ec19-17c6-4838-873d-686ea408b5dc-kube-api-access-mmtg8\") pod \"kube-storage-version-migrator-operator-b67b599dd-tszj5\" (UID: \"c049ec19-17c6-4838-873d-686ea408b5dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073035 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-proxy-tls\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073058 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-serving-cert\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073080 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwzxd\" (UniqueName: \"kubernetes.io/projected/bf8fbefc-06d3-4792-b11c-86e8488b231e-kube-api-access-xwzxd\") pod \"dns-default-8s6sd\" (UID: \"bf8fbefc-06d3-4792-b11c-86e8488b231e\") " pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073101 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ef25191-6af6-4921-973b-f2b127a01a6a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-twgpv\" (UID: \"7ef25191-6af6-4921-973b-f2b127a01a6a\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073153 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e81ff5c-f656-4f24-bf49-33fbec1f7052-ca-trust-extracted\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073178 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2rzm7\" (UniqueName: \"kubernetes.io/projected/1dc73581-03ed-4c2a-8631-0e3adab48686-kube-api-access-2rzm7\") pod \"package-server-manager-789f6589d5-5hfpc\" (UID: \"1dc73581-03ed-4c2a-8631-0e3adab48686\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073249 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t7cmz\" (UID: \"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073271 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9682d1e4-ba48-496f-822e-6b0262676cca-config\") pod \"service-ca-operator-777779d784-gclj9\" (UID: \"9682d1e4-ba48-496f-822e-6b0262676cca\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073293 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc73581-03ed-4c2a-8631-0e3adab48686-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5hfpc\" (UID: \"1dc73581-03ed-4c2a-8631-0e3adab48686\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073315 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nbrh4\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073355 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2d6tn\" (UniqueName: \"kubernetes.io/projected/0dede0f5-5799-426f-94dc-cba3a14494fb-kube-api-access-2d6tn\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073382 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0accd572-1725-44eb-9d94-94184e622485-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9gv8b\" (UID: \"0accd572-1725-44eb-9d94-94184e622485\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073408 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bpcjq\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-kube-api-access-bpcjq\") pod \"image-registry-697d97f7c8-47lbj\" (UID: 
\"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073441 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1209b691-1ca5-4577-a46f-00b590053ec9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073466 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-serving-cert\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073509 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1209b691-1ca5-4577-a46f-00b590053ec9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073534 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0dede0f5-5799-426f-94dc-cba3a14494fb-metrics-certs\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073557 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zpd5\" (UniqueName: 
\"kubernetes.io/projected/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-kube-api-access-9zpd5\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073579 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc588bff-a558-4793-ba57-c1efaf23f92a-webhook-cert\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073600 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-registration-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073625 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ef25191-6af6-4921-973b-f2b127a01a6a-proxy-tls\") pod \"machine-config-controller-84d6567774-twgpv\" (UID: \"7ef25191-6af6-4921-973b-f2b127a01a6a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073647 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrfgh\" (UniqueName: \"kubernetes.io/projected/8bfda5f1-ce68-4682-bb75-e62f14514d81-kube-api-access-qrfgh\") pod \"service-ca-9c57cc56f-7t2lr\" (UID: \"8bfda5f1-ce68-4682-bb75-e62f14514d81\") " pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 
15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073669 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-socket-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073699 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0dede0f5-5799-426f-94dc-cba3a14494fb-default-certificate\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073727 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-zz76h\" (UID: \"3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073734 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-trusted-ca-bundle\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073745 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-config\") pod \"console-f9d7485db-xlwdp\" (UID: 
\"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073750 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vr274\" (UniqueName: \"kubernetes.io/projected/837935a1-6cd1-4472-a692-e9c13f2b7ad7-kube-api-access-vr274\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.073999 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-oauth-config\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.074105 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-images\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.074855 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-certificates\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.075073 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1209b691-1ca5-4577-a46f-00b590053ec9-trusted-ca\") pod 
\"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.075323 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/433c73f4-6132-487a-b44d-ac351cd0fb7d-config\") pod \"kube-controller-manager-operator-78b949d7b-lk9lz\" (UID: \"433c73f4-6132-487a-b44d-ac351cd0fb7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.075682 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0accd572-1725-44eb-9d94-94184e622485-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9gv8b\" (UID: \"0accd572-1725-44eb-9d94-94184e622485\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.075949 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e-config\") pod \"kube-apiserver-operator-766d6c64bb-t7cmz\" (UID: \"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.077210 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-config\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.077310 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" 
(UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-service-ca\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.077542 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/049bc083-906a-459a-9df8-1e1fb5ff8918-service-ca-bundle\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.079514 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-trusted-ca\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.080240 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t7cmz\" (UID: \"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.080371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.080402 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19e22ec0-f904-4fe5-a5ff-ba976489c026-auth-proxy-config\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.080983 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0dede0f5-5799-426f-94dc-cba3a14494fb-service-ca-bundle\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.081468 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:02.58145176 +0000 UTC m=+141.887057951 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.082267 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/19e22ec0-f904-4fe5-a5ff-ba976489c026-config\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.084519 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7e3cc267-5513-4e94-a950-31dace366440-config\") pod \"openshift-apiserver-operator-796bbdcf4f-j5tq6\" (UID: \"7e3cc267-5513-4e94-a950-31dace366440\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.085002 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp"] Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.085043 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-nrnsk"] Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.085448 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-etcd-ca\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " 
pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.085486 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-images\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.085944 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/049bc083-906a-459a-9df8-1e1fb5ff8918-config\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.086180 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-config\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.086210 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e81ff5c-f656-4f24-bf49-33fbec1f7052-ca-trust-extracted\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.087569 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-hjkx4"] Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.092286 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/049bc083-906a-459a-9df8-1e1fb5ff8918-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.093674 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/19e22ec0-f904-4fe5-a5ff-ba976489c026-auth-proxy-config\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.093704 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-auth-proxy-config\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.094494 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/19e22ec0-f904-4fe5-a5ff-ba976489c026-machine-approver-tls\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.094859 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-trusted-ca\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:02 
crc kubenswrapper[4949]: I1001 15:44:02.094908 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-proxy-tls\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.095475 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-serving-cert\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.096251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-etcd-service-ca\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.096621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/0dede0f5-5799-426f-94dc-cba3a14494fb-default-certificate\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.101582 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-etcd-client\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc 
kubenswrapper[4949]: I1001 15:44:02.102380 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/0dede0f5-5799-426f-94dc-cba3a14494fb-stats-auth\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.102766 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0accd572-1725-44eb-9d94-94184e622485-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9gv8b\" (UID: \"0accd572-1725-44eb-9d94-94184e622485\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.103204 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.103419 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e81ff5c-f656-4f24-bf49-33fbec1f7052-installation-pull-secrets\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.104468 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b-control-plane-machine-set-operator-tls\") pod 
\"control-plane-machine-set-operator-78cbb6b69f-zz76h\" (UID: \"3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.104646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/1209b691-1ca5-4577-a46f-00b590053ec9-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.104827 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/0dede0f5-5799-426f-94dc-cba3a14494fb-metrics-certs\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.104921 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7e3cc267-5513-4e94-a950-31dace366440-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-j5tq6\" (UID: \"7e3cc267-5513-4e94-a950-31dace366440\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.105812 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-serving-cert\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.105822 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-t7cmz\" (UID: \"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.110502 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f22fcb2d-8db9-4200-bedb-b47ab677a3f2-metrics-tls\") pod \"dns-operator-744455d44c-mlktd\" (UID: \"f22fcb2d-8db9-4200-bedb-b47ab677a3f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.111908 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wjx4c"] Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.113590 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/433c73f4-6132-487a-b44d-ac351cd0fb7d-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-lk9lz\" (UID: \"433c73f4-6132-487a-b44d-ac351cd0fb7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.114355 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/049bc083-906a-459a-9df8-1e1fb5ff8918-serving-cert\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.114765 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.117940 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-oauth-config\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.120886 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-tls\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: W1001 15:44:02.121513 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44c12e71_4fbb_407d_b4b2_81ae599a853a.slice/crio-7a1b1a7052ccf113a9240f08108074689d68153947c0314235d37c32d0401bb9 WatchSource:0}: Error finding container 7a1b1a7052ccf113a9240f08108074689d68153947c0314235d37c32d0401bb9: Status 404 returned error can't find the container with id 7a1b1a7052ccf113a9240f08108074689d68153947c0314235d37c32d0401bb9 Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.136818 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t2h8w\" (UniqueName: \"kubernetes.io/projected/1209b691-1ca5-4577-a46f-00b590053ec9-kube-api-access-t2h8w\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.142973 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqzxg\" (UniqueName: \"kubernetes.io/projected/c04759f6-7a1a-43cb-a705-41d2319646b6-kube-api-access-cqzxg\") pod \"downloads-7954f5f757-p2vmt\" (UID: \"c04759f6-7a1a-43cb-a705-41d2319646b6\") " pod="openshift-console/downloads-7954f5f757-p2vmt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.163174 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/433c73f4-6132-487a-b44d-ac351cd0fb7d-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-lk9lz\" (UID: \"433c73f4-6132-487a-b44d-ac351cd0fb7d\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.185319 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5s6mg\" (UniqueName: \"kubernetes.io/projected/7e3cc267-5513-4e94-a950-31dace366440-kube-api-access-5s6mg\") pod \"openshift-apiserver-operator-796bbdcf4f-j5tq6\" (UID: \"7e3cc267-5513-4e94-a950-31dace366440\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196107 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196424 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc4f3054-5b96-488b-b209-fa9433c513ab-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hbkqd\" (UID: \"cc4f3054-5b96-488b-b209-fa9433c513ab\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196464 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9682d1e4-ba48-496f-822e-6b0262676cca-serving-cert\") pod \"service-ca-operator-777779d784-gclj9\" (UID: \"9682d1e4-ba48-496f-822e-6b0262676cca\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196489 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qhpk6\" (UniqueName: \"kubernetes.io/projected/dc588bff-a558-4793-ba57-c1efaf23f92a-kube-api-access-qhpk6\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196511 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5ms8k\" (UniqueName: \"kubernetes.io/projected/5dfe4211-4bb6-47e4-9797-652393f66bc5-kube-api-access-5ms8k\") pod \"marketplace-operator-79b997595-nbrh4\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196548 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-secret-volume\") pod \"collect-profiles-29322210-dtf9r\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196574 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: 
\"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-mountpoint-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-85snb\" (UniqueName: \"kubernetes.io/projected/cc4f3054-5b96-488b-b209-fa9433c513ab-kube-api-access-85snb\") pod \"olm-operator-6b444d44fb-hbkqd\" (UID: \"cc4f3054-5b96-488b-b209-fa9433c513ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196629 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8fbefc-06d3-4792-b11c-86e8488b231e-config-volume\") pod \"dns-default-8s6sd\" (UID: \"bf8fbefc-06d3-4792-b11c-86e8488b231e\") " pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196648 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gwklq\" (UniqueName: \"kubernetes.io/projected/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-kube-api-access-gwklq\") pod \"collect-profiles-29322210-dtf9r\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196668 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7755b8a6-1aee-422e-ad5a-b56cd77c0234-certs\") pod \"machine-config-server-qb6b6\" (UID: \"7755b8a6-1aee-422e-ad5a-b56cd77c0234\") " pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196694 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"serving-cert\" (UniqueName: \"kubernetes.io/secret/c049ec19-17c6-4838-873d-686ea408b5dc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tszj5\" (UID: \"c049ec19-17c6-4838-873d-686ea408b5dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196715 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf8fbefc-06d3-4792-b11c-86e8488b231e-metrics-tls\") pod \"dns-default-8s6sd\" (UID: \"bf8fbefc-06d3-4792-b11c-86e8488b231e\") " pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196737 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25hkw\" (UniqueName: \"kubernetes.io/projected/7755b8a6-1aee-422e-ad5a-b56cd77c0234-kube-api-access-25hkw\") pod \"machine-config-server-qb6b6\" (UID: \"7755b8a6-1aee-422e-ad5a-b56cd77c0234\") " pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196764 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc588bff-a558-4793-ba57-c1efaf23f92a-apiservice-cert\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196786 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdw6t\" (UniqueName: \"kubernetes.io/projected/9682d1e4-ba48-496f-822e-6b0262676cca-kube-api-access-pdw6t\") pod \"service-ca-operator-777779d784-gclj9\" (UID: \"9682d1e4-ba48-496f-822e-6b0262676cca\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc 
kubenswrapper[4949]: I1001 15:44:02.196826 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8bfda5f1-ce68-4682-bb75-e62f14514d81-signing-cabundle\") pod \"service-ca-9c57cc56f-7t2lr\" (UID: \"8bfda5f1-ce68-4682-bb75-e62f14514d81\") " pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196866 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c049ec19-17c6-4838-873d-686ea408b5dc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tszj5\" (UID: \"c049ec19-17c6-4838-873d-686ea408b5dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196889 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc4f3054-5b96-488b-b209-fa9433c513ab-srv-cert\") pod \"olm-operator-6b444d44fb-hbkqd\" (UID: \"cc4f3054-5b96-488b-b209-fa9433c513ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7755b8a6-1aee-422e-ad5a-b56cd77c0234-node-bootstrap-token\") pod \"machine-config-server-qb6b6\" (UID: \"7755b8a6-1aee-422e-ad5a-b56cd77c0234\") " pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196936 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8bfda5f1-ce68-4682-bb75-e62f14514d81-signing-key\") pod \"service-ca-9c57cc56f-7t2lr\" (UID: \"8bfda5f1-ce68-4682-bb75-e62f14514d81\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196958 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-plugins-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.196987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ltq8c\" (UniqueName: \"kubernetes.io/projected/7ef25191-6af6-4921-973b-f2b127a01a6a-kube-api-access-ltq8c\") pod \"machine-config-controller-84d6567774-twgpv\" (UID: \"7ef25191-6af6-4921-973b-f2b127a01a6a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.197040 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hfc44\" (UniqueName: \"kubernetes.io/projected/edc35680-cc63-485d-aa02-4e08f96d86fa-kube-api-access-hfc44\") pod \"ingress-canary-kgxcz\" (UID: \"edc35680-cc63-485d-aa02-4e08f96d86fa\") " pod="openshift-ingress-canary/ingress-canary-kgxcz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.197377 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-csi-data-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.197430 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc588bff-a558-4793-ba57-c1efaf23f92a-tmpfs\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: 
\"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.197454 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edc35680-cc63-485d-aa02-4e08f96d86fa-cert\") pod \"ingress-canary-kgxcz\" (UID: \"edc35680-cc63-485d-aa02-4e08f96d86fa\") " pod="openshift-ingress-canary/ingress-canary-kgxcz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.197497 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nbrh4\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.197519 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-config-volume\") pod \"collect-profiles-29322210-dtf9r\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.197554 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mmtg8\" (UniqueName: \"kubernetes.io/projected/c049ec19-17c6-4838-873d-686ea408b5dc-kube-api-access-mmtg8\") pod \"kube-storage-version-migrator-operator-b67b599dd-tszj5\" (UID: \"c049ec19-17c6-4838-873d-686ea408b5dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.199715 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-xwzxd\" (UniqueName: \"kubernetes.io/projected/bf8fbefc-06d3-4792-b11c-86e8488b231e-kube-api-access-xwzxd\") pod \"dns-default-8s6sd\" (UID: \"bf8fbefc-06d3-4792-b11c-86e8488b231e\") " pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.199745 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2rzm7\" (UniqueName: \"kubernetes.io/projected/1dc73581-03ed-4c2a-8631-0e3adab48686-kube-api-access-2rzm7\") pod \"package-server-manager-789f6589d5-5hfpc\" (UID: \"1dc73581-03ed-4c2a-8631-0e3adab48686\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.199780 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ef25191-6af6-4921-973b-f2b127a01a6a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-twgpv\" (UID: \"7ef25191-6af6-4921-973b-f2b127a01a6a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.199822 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9682d1e4-ba48-496f-822e-6b0262676cca-config\") pod \"service-ca-operator-777779d784-gclj9\" (UID: \"9682d1e4-ba48-496f-822e-6b0262676cca\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.199859 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/1dc73581-03ed-4c2a-8631-0e3adab48686-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5hfpc\" (UID: \"1dc73581-03ed-4c2a-8631-0e3adab48686\") " 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.199891 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nbrh4\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.199952 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-registration-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.199984 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc588bff-a558-4793-ba57-c1efaf23f92a-webhook-cert\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.200007 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ef25191-6af6-4921-973b-f2b127a01a6a-proxy-tls\") pod \"machine-config-controller-84d6567774-twgpv\" (UID: \"7ef25191-6af6-4921-973b-f2b127a01a6a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.200030 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrfgh\" (UniqueName: 
\"kubernetes.io/projected/8bfda5f1-ce68-4682-bb75-e62f14514d81-kube-api-access-qrfgh\") pod \"service-ca-9c57cc56f-7t2lr\" (UID: \"8bfda5f1-ce68-4682-bb75-e62f14514d81\") " pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.200054 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-socket-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.200079 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vr274\" (UniqueName: \"kubernetes.io/projected/837935a1-6cd1-4472-a692-e9c13f2b7ad7-kube-api-access-vr274\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.200567 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/dc588bff-a558-4793-ba57-c1efaf23f92a-tmpfs\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.199190 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-plugins-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.200624 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/c049ec19-17c6-4838-873d-686ea408b5dc-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-tszj5\" (UID: \"c049ec19-17c6-4838-873d-686ea408b5dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.200716 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:02.700700472 +0000 UTC m=+142.006306663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.200834 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-mountpoint-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.200844 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/8bfda5f1-ce68-4682-bb75-e62f14514d81-signing-key\") pod \"service-ca-9c57cc56f-7t2lr\" (UID: \"8bfda5f1-ce68-4682-bb75-e62f14514d81\") " pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.201224 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9682d1e4-ba48-496f-822e-6b0262676cca-config\") pod \"service-ca-operator-777779d784-gclj9\" (UID: \"9682d1e4-ba48-496f-822e-6b0262676cca\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.201303 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/7ef25191-6af6-4921-973b-f2b127a01a6a-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-twgpv\" (UID: \"7ef25191-6af6-4921-973b-f2b127a01a6a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.202687 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/dc588bff-a558-4793-ba57-c1efaf23f92a-apiservice-cert\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.202983 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/bf8fbefc-06d3-4792-b11c-86e8488b231e-config-volume\") pod \"dns-default-8s6sd\" (UID: \"bf8fbefc-06d3-4792-b11c-86e8488b231e\") " pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.203258 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-bound-sa-token\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:02 crc kubenswrapper[4949]: 
I1001 15:44:02.198031 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c049ec19-17c6-4838-873d-686ea408b5dc-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-tszj5\" (UID: \"c049ec19-17c6-4838-873d-686ea408b5dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.203797 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/cc4f3054-5b96-488b-b209-fa9433c513ab-srv-cert\") pod \"olm-operator-6b444d44fb-hbkqd\" (UID: \"cc4f3054-5b96-488b-b209-fa9433c513ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.204460 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-socket-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.204600 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-config-volume\") pod \"collect-profiles-29322210-dtf9r\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.205164 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/7ef25191-6af6-4921-973b-f2b127a01a6a-proxy-tls\") pod \"machine-config-controller-84d6567774-twgpv\" (UID: \"7ef25191-6af6-4921-973b-f2b127a01a6a\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.205265 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-registration-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.198915 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/8bfda5f1-ce68-4682-bb75-e62f14514d81-signing-cabundle\") pod \"service-ca-9c57cc56f-7t2lr\" (UID: \"8bfda5f1-ce68-4682-bb75-e62f14514d81\") " pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.198982 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/837935a1-6cd1-4472-a692-e9c13f2b7ad7-csi-data-dir\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.205876 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/edc35680-cc63-485d-aa02-4e08f96d86fa-cert\") pod \"ingress-canary-kgxcz\" (UID: \"edc35680-cc63-485d-aa02-4e08f96d86fa\") " pod="openshift-ingress-canary/ingress-canary-kgxcz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.206917 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/cc4f3054-5b96-488b-b209-fa9433c513ab-profile-collector-cert\") pod \"olm-operator-6b444d44fb-hbkqd\" (UID: \"cc4f3054-5b96-488b-b209-fa9433c513ab\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.207663 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-nbrh4\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.207982 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/7755b8a6-1aee-422e-ad5a-b56cd77c0234-node-bootstrap-token\") pod \"machine-config-server-qb6b6\" (UID: \"7755b8a6-1aee-422e-ad5a-b56cd77c0234\") " pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.209362 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/bf8fbefc-06d3-4792-b11c-86e8488b231e-metrics-tls\") pod \"dns-default-8s6sd\" (UID: \"bf8fbefc-06d3-4792-b11c-86e8488b231e\") " pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.210573 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dc588bff-a558-4793-ba57-c1efaf23f92a-webhook-cert\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.210672 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-secret-volume\") pod \"collect-profiles-29322210-dtf9r\" (UID: 
\"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.210703 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-nbrh4\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.211385 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9682d1e4-ba48-496f-822e-6b0262676cca-serving-cert\") pod \"service-ca-operator-777779d784-gclj9\" (UID: \"9682d1e4-ba48-496f-822e-6b0262676cca\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.224035 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/7755b8a6-1aee-422e-ad5a-b56cd77c0234-certs\") pod \"machine-config-server-qb6b6\" (UID: \"7755b8a6-1aee-422e-ad5a-b56cd77c0234\") " pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.224505 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lj724\" (UniqueName: \"kubernetes.io/projected/62b77904-e0d8-4a98-b6e0-49b2c18821db-kube-api-access-lj724\") pod \"console-f9d7485db-xlwdp\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.224085 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: 
\"kubernetes.io/secret/1dc73581-03ed-4c2a-8631-0e3adab48686-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-5hfpc\" (UID: \"1dc73581-03ed-4c2a-8631-0e3adab48686\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.244003 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs4b6\" (UniqueName: \"kubernetes.io/projected/3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b-kube-api-access-bs4b6\") pod \"control-plane-machine-set-operator-78cbb6b69f-zz76h\" (UID: \"3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.266687 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7w7zp\" (UniqueName: \"kubernetes.io/projected/33bf5fa2-f552-4a5a-828d-30b3db9c29a3-kube-api-access-7w7zp\") pod \"ingress-operator-5b745b69d9-fv7wt\" (UID: \"33bf5fa2-f552-4a5a-828d-30b3db9c29a3\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.280817 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7qqk\" (UniqueName: \"kubernetes.io/projected/971a7b5f-3076-4e58-a96c-d5902c05b319-kube-api-access-p7qqk\") pod \"migrator-59844c95c7-htjbx\" (UID: \"971a7b5f-3076-4e58-a96c-d5902c05b319\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.286868 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.296021 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-p2vmt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.299261 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.302460 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.303657 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:02.803640062 +0000 UTC m=+142.109246253 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.305199 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.307671 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-6tt8n"] Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.308569 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q9l64\" (UniqueName: \"kubernetes.io/projected/19e22ec0-f904-4fe5-a5ff-ba976489c026-kube-api-access-q9l64\") pod \"machine-approver-56656f9798-bpvnv\" (UID: \"19e22ec0-f904-4fe5-a5ff-ba976489c026\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.325648 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg9pg\" (UniqueName: \"kubernetes.io/projected/08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23-kube-api-access-pg9pg\") pod \"etcd-operator-b45778765-l2tfx\" (UID: \"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23\") " pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: W1001 15:44:02.334731 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddbf87b2b_7468_46c2_bf9b_cd95faaedbb9.slice/crio-65f922ecbc0ad56e613420482a1686bb392522af63d926b69ab637c2cb4350b7 WatchSource:0}: Error finding container 65f922ecbc0ad56e613420482a1686bb392522af63d926b69ab637c2cb4350b7: Status 404 returned error can't find the container with id 65f922ecbc0ad56e613420482a1686bb392522af63d926b69ab637c2cb4350b7 Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.337067 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" 
event={"ID":"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8","Type":"ContainerStarted","Data":"b54bf423cc69ab5f5907b845da2760e77f3e912cba66ed4bdb0f8e226c60272b"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.339260 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nrnsk" event={"ID":"4ddc725e-4c58-4cd2-a228-e53fede0a61e","Type":"ContainerStarted","Data":"2e7feb8d3235aca0aac1fedda4dedbb444975be1a583c86cf1657776b6126ba3"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.339345 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wld4k\" (UniqueName: \"kubernetes.io/projected/049bc083-906a-459a-9df8-1e1fb5ff8918-kube-api-access-wld4k\") pod \"authentication-operator-69f744f599-jpqsm\" (UID: \"049bc083-906a-459a-9df8-1e1fb5ff8918\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.347953 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" event={"ID":"44c12e71-4fbb-407d-b4b2-81ae599a853a","Type":"ContainerStarted","Data":"7a1b1a7052ccf113a9240f08108074689d68153947c0314235d37c32d0401bb9"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.354001 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.354925 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" event={"ID":"b3103144-2f18-4cc3-82ad-3fedf7e23914","Type":"ContainerStarted","Data":"d62e2f545bab5accda00a38e9ad850c3b3ca8ad7bebef8c4a2e64ad82e747dfa"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.361535 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-bound-sa-token\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.361684 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" event={"ID":"6d0cfaab-30c5-4483-8f77-5929e026cb31","Type":"ContainerStarted","Data":"8091c872bb9d7f4477565f7a7f34092a85a26282ffbd9cc45ab7b9fa46fa57c4"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.365341 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5"] Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.366297 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" event={"ID":"54bc2b08-ed4b-45fc-baa6-e681a412b2ed","Type":"ContainerStarted","Data":"61631ea9174eae2008fa441ea5db20df906a138a7e4f25b14c4684426433f1f9"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.370526 4949 generic.go:334] "Generic (PLEG): container finished" podID="91e40664-3369-4ee6-816c-03b6272c3d15" containerID="2f9d5bb3de1268b7203070f5dbe5560d86327ad7ced4dd78f24d8a6ea69adc43" exitCode=0 Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 
15:44:02.370669 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" event={"ID":"91e40664-3369-4ee6-816c-03b6272c3d15","Type":"ContainerDied","Data":"2f9d5bb3de1268b7203070f5dbe5560d86327ad7ced4dd78f24d8a6ea69adc43"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.370705 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" event={"ID":"91e40664-3369-4ee6-816c-03b6272c3d15","Type":"ContainerStarted","Data":"2a0cb2386b4d2363f6c826066a1e5b568435a9cd5d4164a427d6b0c5e116a4ab"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.374071 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" event={"ID":"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342","Type":"ContainerStarted","Data":"c66212f138dd731ce04816c9313303d83af05d266f166596e7f0acbada9a70c5"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.374115 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" event={"ID":"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342","Type":"ContainerStarted","Data":"be713065403d589f3e5295d9b29a93c5e8fc94c9713a27166c2fe42d0df05aa0"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.374576 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.375052 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.377417 4949 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-4v6mr container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.377464 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" podUID="ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.379136 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" event={"ID":"0df389db-e47d-4b16-9221-f1e5311c5cd6","Type":"ContainerStarted","Data":"5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.379191 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" event={"ID":"0df389db-e47d-4b16-9221-f1e5311c5cd6","Type":"ContainerStarted","Data":"df9635a815ea03823da56a9c4158745f5984d0fd963e2fbc26517136c9eff0b1"} Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.379924 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.381533 4949 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sm5dr container/controller-manager namespace/openshift-controller-manager: Readiness probe 
status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.381580 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" podUID="0df389db-e47d-4b16-9221-f1e5311c5cd6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.382489 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.382920 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mnxdl\" (UniqueName: \"kubernetes.io/projected/ff9c89a1-2f76-47e5-9e37-866d1d8adef2-kube-api-access-mnxdl\") pod \"machine-api-operator-5694c8668f-t8vc2\" (UID: \"ff9c89a1-2f76-47e5-9e37-866d1d8adef2\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.403758 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.404283 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:02.904257882 +0000 UTC m=+142.209864123 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.407234 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9njbk\" (UniqueName: \"kubernetes.io/projected/f22fcb2d-8db9-4200-bedb-b47ab677a3f2-kube-api-access-9njbk\") pod \"dns-operator-744455d44c-mlktd\" (UID: \"f22fcb2d-8db9-4200-bedb-b47ab677a3f2\") " pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.425453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zpd5\" (UniqueName: \"kubernetes.io/projected/a4a80548-d2f1-40ee-a409-dd3cbed9ead2-kube-api-access-9zpd5\") pod \"machine-config-operator-74547568cd-pblzs\" (UID: \"a4a80548-d2f1-40ee-a409-dd3cbed9ead2\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.443985 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3103144_2f18_4cc3_82ad_3fedf7e23914.slice/crio-conmon-a0e8959ca928d474866ed16891d0efebf9eb066feee1760475b25bc21a13cb5e.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3103144_2f18_4cc3_82ad_3fedf7e23914.slice/crio-a0e8959ca928d474866ed16891d0efebf9eb066feee1760475b25bc21a13cb5e.scope\": RecentStats: unable to find data in memory cache]" Oct 01 15:44:02 crc 
kubenswrapper[4949]: I1001 15:44:02.445271 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2d6tn\" (UniqueName: \"kubernetes.io/projected/0dede0f5-5799-426f-94dc-cba3a14494fb-kube-api-access-2d6tn\") pod \"router-default-5444994796-58ggj\" (UID: \"0dede0f5-5799-426f-94dc-cba3a14494fb\") " pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.499088 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.506787 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.510024 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.010007383 +0000 UTC m=+142.315613574 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.519069 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.527313 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0accd572-1725-44eb-9d94-94184e622485-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-9gv8b\" (UID: \"0accd572-1725-44eb-9d94-94184e622485\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.534428 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-t7cmz\" (UID: \"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.535876 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1209b691-1ca5-4577-a46f-00b590053ec9-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mqv74\" (UID: \"1209b691-1ca5-4577-a46f-00b590053ec9\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.536262 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bpcjq\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-kube-api-access-bpcjq\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.568246 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.571430 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdw6t\" (UniqueName: \"kubernetes.io/projected/9682d1e4-ba48-496f-822e-6b0262676cca-kube-api-access-pdw6t\") pod \"service-ca-operator-777779d784-gclj9\" (UID: \"9682d1e4-ba48-496f-822e-6b0262676cca\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.580731 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.584589 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ltq8c\" (UniqueName: \"kubernetes.io/projected/7ef25191-6af6-4921-973b-f2b127a01a6a-kube-api-access-ltq8c\") pod \"machine-config-controller-84d6567774-twgpv\" (UID: \"7ef25191-6af6-4921-973b-f2b127a01a6a\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.604445 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hfc44\" (UniqueName: \"kubernetes.io/projected/edc35680-cc63-485d-aa02-4e08f96d86fa-kube-api-access-hfc44\") pod \"ingress-canary-kgxcz\" (UID: \"edc35680-cc63-485d-aa02-4e08f96d86fa\") " pod="openshift-ingress-canary/ingress-canary-kgxcz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.612555 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.613582 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.614309 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.114289562 +0000 UTC m=+142.419895753 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.625818 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.626129 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.633439 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vr274\" (UniqueName: \"kubernetes.io/projected/837935a1-6cd1-4472-a692-e9c13f2b7ad7-kube-api-access-vr274\") pod \"csi-hostpathplugin-rsrns\" (UID: \"837935a1-6cd1-4472-a692-e9c13f2b7ad7\") " pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.633561 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.645053 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25hkw\" (UniqueName: \"kubernetes.io/projected/7755b8a6-1aee-422e-ad5a-b56cd77c0234-kube-api-access-25hkw\") pod \"machine-config-server-qb6b6\" (UID: \"7755b8a6-1aee-422e-ad5a-b56cd77c0234\") " pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.647466 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.669107 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwklq\" (UniqueName: \"kubernetes.io/projected/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-kube-api-access-gwklq\") pod \"collect-profiles-29322210-dtf9r\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.691949 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.704733 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qrfgh\" (UniqueName: \"kubernetes.io/projected/8bfda5f1-ce68-4682-bb75-e62f14514d81-kube-api-access-qrfgh\") pod \"service-ca-9c57cc56f-7t2lr\" (UID: \"8bfda5f1-ce68-4682-bb75-e62f14514d81\") " pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.705024 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.711577 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-85snb\" (UniqueName: \"kubernetes.io/projected/cc4f3054-5b96-488b-b209-fa9433c513ab-kube-api-access-85snb\") pod \"olm-operator-6b444d44fb-hbkqd\" (UID: \"cc4f3054-5b96-488b-b209-fa9433c513ab\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.717333 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.717929 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.217899961 +0000 UTC m=+142.523506152 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.723739 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.734499 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.741419 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwzxd\" (UniqueName: \"kubernetes.io/projected/bf8fbefc-06d3-4792-b11c-86e8488b231e-kube-api-access-xwzxd\") pod \"dns-default-8s6sd\" (UID: \"bf8fbefc-06d3-4792-b11c-86e8488b231e\") " pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.742443 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-p2vmt"] Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.756065 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.763838 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.767801 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2rzm7\" (UniqueName: \"kubernetes.io/projected/1dc73581-03ed-4c2a-8631-0e3adab48686-kube-api-access-2rzm7\") pod \"package-server-manager-789f6589d5-5hfpc\" (UID: \"1dc73581-03ed-4c2a-8631-0e3adab48686\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.769611 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mmtg8\" (UniqueName: \"kubernetes.io/projected/c049ec19-17c6-4838-873d-686ea408b5dc-kube-api-access-mmtg8\") pod \"kube-storage-version-migrator-operator-b67b599dd-tszj5\" (UID: \"c049ec19-17c6-4838-873d-686ea408b5dc\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.781013 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.786592 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5ms8k\" (UniqueName: \"kubernetes.io/projected/5dfe4211-4bb6-47e4-9797-652393f66bc5-kube-api-access-5ms8k\") pod \"marketplace-operator-79b997595-nbrh4\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.794702 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-kgxcz" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.819916 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-rsrns" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.820565 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.820863 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.320851229 +0000 UTC m=+142.626457420 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.820925 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-qb6b6" Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.829965 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qhpk6\" (UniqueName: \"kubernetes.io/projected/dc588bff-a558-4793-ba57-c1efaf23f92a-kube-api-access-qhpk6\") pod \"packageserver-d55dfcdfc-w9c7m\" (UID: \"dc588bff-a558-4793-ba57-c1efaf23f92a\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:02 crc kubenswrapper[4949]: W1001 15:44:02.837076 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc04759f6_7a1a_43cb_a705_41d2319646b6.slice/crio-0b6bbb5c105cad32505766a33bb777c5970d796c3072ad62737baf71bd5cfd83 WatchSource:0}: Error finding container 0b6bbb5c105cad32505766a33bb777c5970d796c3072ad62737baf71bd5cfd83: Status 404 returned error can't find the container with id 0b6bbb5c105cad32505766a33bb777c5970d796c3072ad62737baf71bd5cfd83 Oct 01 15:44:02 crc kubenswrapper[4949]: I1001 15:44:02.922663 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:02 crc kubenswrapper[4949]: E1001 15:44:02.923211 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.423195291 +0000 UTC m=+142.728801482 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: W1001 15:44:03.004111 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0dede0f5_5799_426f_94dc_cba3a14494fb.slice/crio-9fba825c093209fff6dd72489119a78718755ea8b3a2d96bf8a59ccda9e3326b WatchSource:0}: Error finding container 9fba825c093209fff6dd72489119a78718755ea8b3a2d96bf8a59ccda9e3326b: Status 404 returned error can't find the container with id 9fba825c093209fff6dd72489119a78718755ea8b3a2d96bf8a59ccda9e3326b Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.006780 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.016956 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.017264 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.024185 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.024556 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz"] Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.024708 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.524672266 +0000 UTC m=+142.830278457 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.040057 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.047252 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.072752 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.075483 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-xlwdp"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.119299 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.126976 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.127925 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.627909864 +0000 UTC m=+142.933516055 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.140672 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-l2tfx"] Oct 01 15:44:03 crc kubenswrapper[4949]: W1001 15:44:03.141142 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod33bf5fa2_f552_4a5a_828d_30b3db9c29a3.slice/crio-32ba9fbc841ec1867e86d01a5f99b4c122f8994e13190c68bcaba0a2d00bc5ba WatchSource:0}: Error finding container 32ba9fbc841ec1867e86d01a5f99b4c122f8994e13190c68bcaba0a2d00bc5ba: Status 404 returned error can't find the container with id 32ba9fbc841ec1867e86d01a5f99b4c122f8994e13190c68bcaba0a2d00bc5ba Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.227973 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.228143 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.728103672 +0000 UTC m=+143.033709863 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.228424 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.228815 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.728802502 +0000 UTC m=+143.034408683 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.328366 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-mlktd"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.350245 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.354536 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.854506605 +0000 UTC m=+143.160112796 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.358152 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-t8vc2"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.371188 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.421240 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-nrnsk" event={"ID":"4ddc725e-4c58-4cd2-a228-e53fede0a61e","Type":"ContainerStarted","Data":"7ef537f82d4bd015d59c882ee03124510f63ac0a01fe99b2f1fb02fa32fdc936"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.427762 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.430076 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" event={"ID":"4dec871b-0b00-4f01-b97c-aaf139f5f879","Type":"ContainerStarted","Data":"f3111ff8f41345c88852d1c43ac313033fe7a59563027489aafdb7b93ef55ec2"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.430109 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" 
event={"ID":"4dec871b-0b00-4f01-b97c-aaf139f5f879","Type":"ContainerStarted","Data":"148ddf8fcd27a1e78746005e3b84ed29b888031d514ba76b8e4d972874ba70ff"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.430390 4949 patch_prober.go:28] interesting pod/console-operator-58897d9998-nrnsk container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body= Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.430440 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-nrnsk" podUID="4ddc725e-4c58-4cd2-a228-e53fede0a61e" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.8:8443/readyz\": dial tcp 10.217.0.8:8443: connect: connection refused" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.430493 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.431184 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" event={"ID":"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8","Type":"ContainerStarted","Data":"7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.431727 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.432334 4949 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-8znr5 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" start-of-body= 
Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.432365 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" podUID="4dec871b-0b00-4f01-b97c-aaf139f5f879" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.21:8443/healthz\": dial tcp 10.217.0.21:8443: connect: connection refused" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.433829 4949 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wjx4c container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.433875 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" podUID="2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.433930 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" event={"ID":"91e40664-3369-4ee6-816c-03b6272c3d15","Type":"ContainerStarted","Data":"9c48cb51b56ed82af9887330bf2a270cafa1b9a65fbf959d33eda6803bd67404"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.434224 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.434889 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" 
event={"ID":"3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b","Type":"ContainerStarted","Data":"b0f0cf5dcb90d188377ca202355673821478ffcd42765f8b01c82bfd03e605ba"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.437636 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" event={"ID":"6d0cfaab-30c5-4483-8f77-5929e026cb31","Type":"ContainerStarted","Data":"5051abb9179a83df7e01d8b320848860040942f15fd17b1de81580dd72808053"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.437667 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" event={"ID":"6d0cfaab-30c5-4483-8f77-5929e026cb31","Type":"ContainerStarted","Data":"6dfc017f0b5af4b41bfb558bf07c889fb6f8884e56ee9c3b635a4bcf140ca586"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.442187 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" event={"ID":"dbf87b2b-7468-46c2-bf9b-cd95faaedbb9","Type":"ContainerStarted","Data":"a1206ee1039adc7d5343c60e36856795224f9d4099091bc473365f1629d153f8"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.442227 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" event={"ID":"dbf87b2b-7468-46c2-bf9b-cd95faaedbb9","Type":"ContainerStarted","Data":"65f922ecbc0ad56e613420482a1686bb392522af63d926b69ab637c2cb4350b7"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.443336 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" event={"ID":"33bf5fa2-f552-4a5a-828d-30b3db9c29a3","Type":"ContainerStarted","Data":"32ba9fbc841ec1867e86d01a5f99b4c122f8994e13190c68bcaba0a2d00bc5ba"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.444689 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="54bc2b08-ed4b-45fc-baa6-e681a412b2ed" containerID="e6f8c4c7096faaf0aed7eb41439c32339524a9af71a06762c0673c964cdfd700" exitCode=0 Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.444742 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" event={"ID":"54bc2b08-ed4b-45fc-baa6-e681a412b2ed","Type":"ContainerDied","Data":"e6f8c4c7096faaf0aed7eb41439c32339524a9af71a06762c0673c964cdfd700"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.447267 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-58ggj" event={"ID":"0dede0f5-5799-426f-94dc-cba3a14494fb","Type":"ContainerStarted","Data":"9fba825c093209fff6dd72489119a78718755ea8b3a2d96bf8a59ccda9e3326b"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.449015 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qb6b6" event={"ID":"7755b8a6-1aee-422e-ad5a-b56cd77c0234","Type":"ContainerStarted","Data":"e98b77503d307396fa741051ec278b5a4bdc01daf032e06013801184736b7a9a"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.450603 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" event={"ID":"19e22ec0-f904-4fe5-a5ff-ba976489c026","Type":"ContainerStarted","Data":"39e25a4f88bffdfdb4bfbf4d0ea0140de651d9baf29f29bf9ae52e86cb730bc2"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.450775 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" event={"ID":"19e22ec0-f904-4fe5-a5ff-ba976489c026","Type":"ContainerStarted","Data":"72d2195615cba6b503b9927db4697fe3f7cc835c6e5f5db0b3daf474b8a7862d"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.451866 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.452384 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:03.952371434 +0000 UTC m=+143.257977625 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.453219 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" event={"ID":"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23","Type":"ContainerStarted","Data":"a1308171fb9a9a759937782192675452ca415ff7576b85f8f7e0fe455703bb3a"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.454464 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" event={"ID":"433c73f4-6132-487a-b44d-ac351cd0fb7d","Type":"ContainerStarted","Data":"f0f036c918825d45746f3f13e1c5bf38aac3bf9d65bc3aebdfa325b35124d42e"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.456696 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" 
event={"ID":"7e3cc267-5513-4e94-a950-31dace366440","Type":"ContainerStarted","Data":"46b0216e3fac997c8a4ac5e66b22bfae35d8741759b818ced8967df9c82a0490"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.458812 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p2vmt" event={"ID":"c04759f6-7a1a-43cb-a705-41d2319646b6","Type":"ContainerStarted","Data":"0b6bbb5c105cad32505766a33bb777c5970d796c3072ad62737baf71bd5cfd83"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.461042 4949 generic.go:334] "Generic (PLEG): container finished" podID="b3103144-2f18-4cc3-82ad-3fedf7e23914" containerID="a0e8959ca928d474866ed16891d0efebf9eb066feee1760475b25bc21a13cb5e" exitCode=0 Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.461426 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" event={"ID":"b3103144-2f18-4cc3-82ad-3fedf7e23914","Type":"ContainerDied","Data":"a0e8959ca928d474866ed16891d0efebf9eb066feee1760475b25bc21a13cb5e"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.462604 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xlwdp" event={"ID":"62b77904-e0d8-4a98-b6e0-49b2c18821db","Type":"ContainerStarted","Data":"dc98c9cfd0dc6a0079dfb2137f79dea8785293a720f9c15966cd71e18f481208"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.463751 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" event={"ID":"44c12e71-4fbb-407d-b4b2-81ae599a853a","Type":"ContainerStarted","Data":"52d01ed96d5d48caad0ae55e7ac66783d24062249cecc1cea65ce52bab634c0b"} Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.464169 4949 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sm5dr container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.464205 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" podUID="0df389db-e47d-4b16-9221-f1e5311c5cd6" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.494417 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" podStartSLOduration=118.494400359 podStartE2EDuration="1m58.494400359s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:03.492894093 +0000 UTC m=+142.798500284" watchObservedRunningTime="2025-10-01 15:44:03.494400359 +0000 UTC m=+142.800006550" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.522660 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.525075 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.551709 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-jpqsm"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.553444 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.553495 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-gclj9"] Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.554719 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:04.054698794 +0000 UTC m=+143.360304975 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.555106 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.573974 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.656216 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: 
\"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.658994 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:04.158958183 +0000 UTC m=+143.464564444 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.659007 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-7t2lr"] Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.708790 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" podStartSLOduration=119.708766117 podStartE2EDuration="1m59.708766117s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:03.704673426 +0000 UTC m=+143.010279637" watchObservedRunningTime="2025-10-01 15:44:03.708766117 +0000 UTC m=+143.014372298" Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.758186 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.765022 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:04.26492882 +0000 UTC m=+143.570535021 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.765523 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.765858 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:04.265846188 +0000 UTC m=+143.571452379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.867542 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.868003 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:04.367983113 +0000 UTC m=+143.673589304 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.969230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:03 crc kubenswrapper[4949]: E1001 15:44:03.969899 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:04.469888051 +0000 UTC m=+143.775494242 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:03 crc kubenswrapper[4949]: I1001 15:44:03.983958 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd"] Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.037750 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv"] Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.070139 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.070479 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:04.570462319 +0000 UTC m=+143.876068510 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.097710 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.153899 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-8s6sd"] Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.159455 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r"] Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.173548 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-kgxcz"] Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.173960 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.174418 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:04.674404509 +0000 UTC m=+143.980010690 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.275729 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.275941 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:04.775913435 +0000 UTC m=+144.081519626 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.276097 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.276704 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:04.776694628 +0000 UTC m=+144.082300819 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.293181 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5"] Oct 01 15:44:04 crc kubenswrapper[4949]: W1001 15:44:04.356272 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbf8fbefc_06d3_4792_b11c_86e8488b231e.slice/crio-aa21715ff407d3cf1ab2846ff064e36260a1b93671f7f09eeaec94b232ab480b WatchSource:0}: Error finding container aa21715ff407d3cf1ab2846ff064e36260a1b93671f7f09eeaec94b232ab480b: Status 404 returned error can't find the container with id aa21715ff407d3cf1ab2846ff064e36260a1b93671f7f09eeaec94b232ab480b Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.376818 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.377146 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:04.877114392 +0000 UTC m=+144.182720583 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.438383 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-rsrns"] Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.479609 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.515993 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.015965275 +0000 UTC m=+144.321571466 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.529179 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc"] Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.537707 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m"] Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.541592 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbrh4"] Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.569438 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" event={"ID":"33bf5fa2-f552-4a5a-828d-30b3db9c29a3","Type":"ContainerStarted","Data":"922c9f7d34914b6bcafedc5a90b0790f668e53f85db585675d96a244dd44d808"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.571423 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" event={"ID":"049bc083-906a-459a-9df8-1e1fb5ff8918","Type":"ContainerStarted","Data":"9d19e27d6e31029a4f1bf0274a3600c85434db95bbb82278ad6ac31c9be946be"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.574224 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" 
event={"ID":"cc4f3054-5b96-488b-b209-fa9433c513ab","Type":"ContainerStarted","Data":"f6f0ff50d639f8b52bd4c63f4c0e6fc466b942da20037631802e0204ca2146be"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.576255 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" event={"ID":"1209b691-1ca5-4577-a46f-00b590053ec9","Type":"ContainerStarted","Data":"822dd85a3095a454f7b644d2b859686cb9dc5d20fb200fd8335e56b7475bc9db"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.581152 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.581270 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.081233168 +0000 UTC m=+144.386839359 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.581396 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.581689 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.081677551 +0000 UTC m=+144.387283742 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.582730 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" event={"ID":"a4a80548-d2f1-40ee-a409-dd3cbed9ead2","Type":"ContainerStarted","Data":"b88312d4c67be7f644d95669cf3ee6cabce898407d82010ea7dc20b34855fd43"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.586001 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-58ggj" event={"ID":"0dede0f5-5799-426f-94dc-cba3a14494fb","Type":"ContainerStarted","Data":"516d781e05b219773e79237397b65741cf69a24d4725c77f18173c28a4f50b57"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.592292 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" event={"ID":"0accd572-1725-44eb-9d94-94184e622485","Type":"ContainerStarted","Data":"ad698146e9fe21992371d1a7171d39c077ef06a55d4ebcbf9ba19fcac7e2670e"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.602137 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-nrnsk" podStartSLOduration=120.602108816 podStartE2EDuration="2m0.602108816s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:04.601012614 +0000 UTC 
m=+143.906618805" watchObservedRunningTime="2025-10-01 15:44:04.602108816 +0000 UTC m=+143.907715007" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.603403 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" event={"ID":"7ef25191-6af6-4921-973b-f2b127a01a6a","Type":"ContainerStarted","Data":"bbeeb4153376bab7f1bca4407d5204a8f0a1501fe48a46ab68202328993825a3"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.610714 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" event={"ID":"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e","Type":"ContainerStarted","Data":"d09ac0711dbaf411b1957157be625a67ad1842dc59cecf4d1d864db723efb055"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.629180 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" event={"ID":"9682d1e4-ba48-496f-822e-6b0262676cca","Type":"ContainerStarted","Data":"d80fc79c80912568ebed26ec474444da4aeffee596a09ab4d42353d612f455a5"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.638860 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-qb6b6" event={"ID":"7755b8a6-1aee-422e-ad5a-b56cd77c0234","Type":"ContainerStarted","Data":"f4c29c8b23e238de3228e69d8bd8fb3c2df0b3eb8bba4dd73ab525eb4d9f21dc"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.651963 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" event={"ID":"19e22ec0-f904-4fe5-a5ff-ba976489c026","Type":"ContainerStarted","Data":"daeda89bb01b39801a69d43448a5a169e9688c9032af3a15babcda4465be6daa"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.653585 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" event={"ID":"98af3efd-3e5b-4bfd-96ae-f3629aa18f43","Type":"ContainerStarted","Data":"add42f11318f86015bb0b00f88cd7acf6e9bd892be2d3c74e409b0f0fc1dc12a"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.676312 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" event={"ID":"8bfda5f1-ce68-4682-bb75-e62f14514d81","Type":"ContainerStarted","Data":"04342eb10383c172889de402dec9c2b413607ca0366b891f0a7c853defc9a3e5"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.682366 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.683443 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.183429534 +0000 UTC m=+144.489035725 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.696291 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" podStartSLOduration=120.696276215 podStartE2EDuration="2m0.696276215s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:04.677193 +0000 UTC m=+143.982799191" watchObservedRunningTime="2025-10-01 15:44:04.696276215 +0000 UTC m=+144.001882406" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.698590 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.698740 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.698773 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.720517 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx" event={"ID":"971a7b5f-3076-4e58-a96c-d5902c05b319","Type":"ContainerStarted","Data":"302136bb25d6aed03e543523d5f62d3a7d36545ecf94d101409aa546c5df094e"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.720574 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx" event={"ID":"971a7b5f-3076-4e58-a96c-d5902c05b319","Type":"ContainerStarted","Data":"be77f9edbbffff9f47442995cdde542368f16815d92569a4a4c72260208e420f"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.721846 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" podStartSLOduration=119.721833822 podStartE2EDuration="1m59.721833822s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:04.720972007 +0000 UTC m=+144.026578218" watchObservedRunningTime="2025-10-01 15:44:04.721833822 +0000 UTC m=+144.027440013" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.726776 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" podStartSLOduration=120.726766388 podStartE2EDuration="2m0.726766388s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:04.696897353 +0000 UTC m=+144.002503544" watchObservedRunningTime="2025-10-01 15:44:04.726766388 +0000 UTC m=+144.032372579" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.731295 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-p2vmt" 
event={"ID":"c04759f6-7a1a-43cb-a705-41d2319646b6","Type":"ContainerStarted","Data":"9a99006fe6c3d5aeb96fa33d97486ae9f135f728b581690680d1857fea8e6e0a"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.732276 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-p2vmt" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.735226 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" event={"ID":"ff9c89a1-2f76-47e5-9e37-866d1d8adef2","Type":"ContainerStarted","Data":"a6e7be0b808027999fcc74ef89900c9ac689a153cd33e1bd9ed9c176dfc0b6e1"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.735279 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" event={"ID":"ff9c89a1-2f76-47e5-9e37-866d1d8adef2","Type":"ContainerStarted","Data":"4ba91872d800fdc79f80a09fdc877851f58438bf2ad34902dd53795f0408a55c"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.736096 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-p2vmt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.736175 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p2vmt" podUID="c04759f6-7a1a-43cb-a705-41d2319646b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.745991 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" 
event={"ID":"f22fcb2d-8db9-4200-bedb-b47ab677a3f2","Type":"ContainerStarted","Data":"4c9c201e4c93760f66953b64fe30ecc36916511d9d4c8dcd1d15ffa0ee1fc212"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.746115 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-gd6wp" podStartSLOduration=120.746091941 podStartE2EDuration="2m0.746091941s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:04.740791994 +0000 UTC m=+144.046398185" watchObservedRunningTime="2025-10-01 15:44:04.746091941 +0000 UTC m=+144.051698132" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.753260 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kgxcz" event={"ID":"edc35680-cc63-485d-aa02-4e08f96d86fa","Type":"ContainerStarted","Data":"a84d5def85dd2803482b406f974e85e2b6456211dab7fae3cec54d2cae7949db"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.764347 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" event={"ID":"dbf87b2b-7468-46c2-bf9b-cd95faaedbb9","Type":"ContainerStarted","Data":"7d125061f1cdf407f7cd62281b7cd49305adb4a875bedfe06c1de7798acc139e"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.768106 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" event={"ID":"433c73f4-6132-487a-b44d-ac351cd0fb7d","Type":"ContainerStarted","Data":"9f4507340d50c73c25ccaa3a437bf279b0a068702cb7f170475d3efdf37e813b"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.781188 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" 
event={"ID":"3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b","Type":"ContainerStarted","Data":"7e86a1f4e875759a7472c5ee38326b47c39e98c99f11d9009d8846811dd038ee"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.783985 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.786515 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.286496457 +0000 UTC m=+144.592102728 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.795986 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8s6sd" event={"ID":"bf8fbefc-06d3-4792-b11c-86e8488b231e","Type":"ContainerStarted","Data":"aa21715ff407d3cf1ab2846ff064e36260a1b93671f7f09eeaec94b232ab480b"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.800737 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" 
event={"ID":"7e3cc267-5513-4e94-a950-31dace366440","Type":"ContainerStarted","Data":"b5df50c33545e2858bfbd99f589ad719dff144e350f7f2390682fb60ddea8543"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.819042 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xlwdp" event={"ID":"62b77904-e0d8-4a98-b6e0-49b2c18821db","Type":"ContainerStarted","Data":"fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43"} Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.845930 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-8znr5" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.885597 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.886297 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-zz76h" podStartSLOduration=120.886278613 podStartE2EDuration="2m0.886278613s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:04.885657983 +0000 UTC m=+144.191264184" watchObservedRunningTime="2025-10-01 15:44:04.886278613 +0000 UTC m=+144.191884814" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.889520 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.889721 4949 nestedpendingoperations.go:348] Operation 
for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.389699054 +0000 UTC m=+144.695305325 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.894219 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-dppnh" podStartSLOduration=120.894203507 podStartE2EDuration="2m0.894203507s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:04.864520888 +0000 UTC m=+144.170127079" watchObservedRunningTime="2025-10-01 15:44:04.894203507 +0000 UTC m=+144.199809698" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 15:44:04.920540 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-p2vmt" podStartSLOduration=120.920524967 podStartE2EDuration="2m0.920524967s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:04.920323531 +0000 UTC m=+144.225929722" watchObservedRunningTime="2025-10-01 15:44:04.920524967 +0000 UTC m=+144.226131158" Oct 01 15:44:04 crc kubenswrapper[4949]: I1001 
15:44:04.992152 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:04 crc kubenswrapper[4949]: E1001 15:44:04.992512 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.492500459 +0000 UTC m=+144.798106650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.009495 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-xlwdp" podStartSLOduration=121.009478381 podStartE2EDuration="2m1.009478381s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:05.008946005 +0000 UTC m=+144.314552196" watchObservedRunningTime="2025-10-01 15:44:05.009478381 +0000 UTC m=+144.315084572" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.009812 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-58ggj" 
podStartSLOduration=121.009808391 podStartE2EDuration="2m1.009808391s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:04.978130112 +0000 UTC m=+144.283736323" watchObservedRunningTime="2025-10-01 15:44:05.009808391 +0000 UTC m=+144.315414582" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.041467 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-nrnsk" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.042577 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-j5tq6" podStartSLOduration=121.042569261 podStartE2EDuration="2m1.042569261s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:05.041756417 +0000 UTC m=+144.347362608" watchObservedRunningTime="2025-10-01 15:44:05.042569261 +0000 UTC m=+144.348175452" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.079245 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-qb6b6" podStartSLOduration=6.079226157 podStartE2EDuration="6.079226157s" podCreationTimestamp="2025-10-01 15:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:05.076979731 +0000 UTC m=+144.382585922" watchObservedRunningTime="2025-10-01 15:44:05.079226157 +0000 UTC m=+144.384832358" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.092794 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.092924 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.592901271 +0000 UTC m=+144.898507462 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.093088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.093620 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.593604513 +0000 UTC m=+144.899210704 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.122372 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-lk9lz" podStartSLOduration=121.122356434 podStartE2EDuration="2m1.122356434s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:05.119963163 +0000 UTC m=+144.425569364" watchObservedRunningTime="2025-10-01 15:44:05.122356434 +0000 UTC m=+144.427962625" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.195247 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.195906 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.695891312 +0000 UTC m=+145.001497503 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.202234 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-6tt8n" podStartSLOduration=120.202217219 podStartE2EDuration="2m0.202217219s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:05.19986551 +0000 UTC m=+144.505471701" watchObservedRunningTime="2025-10-01 15:44:05.202217219 +0000 UTC m=+144.507823410" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.291371 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-bpvnv" podStartSLOduration=121.291348329 podStartE2EDuration="2m1.291348329s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:05.262213877 +0000 UTC m=+144.567820068" watchObservedRunningTime="2025-10-01 15:44:05.291348329 +0000 UTC m=+144.596954520" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.307278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: 
\"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.307680 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.807663952 +0000 UTC m=+145.113270143 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.407914 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.408300 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:05.908285203 +0000 UTC m=+145.213891394 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.441796 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zq8nf"] Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.442930 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.447908 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.450852 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zq8nf"] Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.514962 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.515368 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.015354894 +0000 UTC m=+145.320961085 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.615850 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.616010 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.115979864 +0000 UTC m=+145.421586055 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.616180 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-utilities\") pod \"certified-operators-zq8nf\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.616234 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.616291 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gc8nl\" (UniqueName: \"kubernetes.io/projected/46c5822e-cd0c-4c66-828c-a0f9a50879de-kube-api-access-gc8nl\") pod \"certified-operators-zq8nf\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.616361 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-catalog-content\") pod \"certified-operators-zq8nf\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.616577 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.116559941 +0000 UTC m=+145.422166122 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.707756 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:05 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:05 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:05 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.708042 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.717509 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.717660 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gc8nl\" (UniqueName: \"kubernetes.io/projected/46c5822e-cd0c-4c66-828c-a0f9a50879de-kube-api-access-gc8nl\") pod \"certified-operators-zq8nf\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.717717 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-catalog-content\") pod \"certified-operators-zq8nf\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.717771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-utilities\") pod \"certified-operators-zq8nf\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.718171 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-utilities\") pod \"certified-operators-zq8nf\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.718256 4949 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.218241253 +0000 UTC m=+145.523847444 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.718696 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-catalog-content\") pod \"certified-operators-zq8nf\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.747675 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gc8nl\" (UniqueName: \"kubernetes.io/projected/46c5822e-cd0c-4c66-828c-a0f9a50879de-kube-api-access-gc8nl\") pod \"certified-operators-zq8nf\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.788111 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-h9w85"] Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.789163 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.798413 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.805652 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9w85"] Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.818827 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.819271 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.319256205 +0000 UTC m=+145.624862396 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.824403 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" event={"ID":"dc588bff-a558-4793-ba57-c1efaf23f92a","Type":"ContainerStarted","Data":"b4956bf2621cfa3e87aca42254bf935765754a5975b96024f5fbb6dd4da42c7e"} Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.825528 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" event={"ID":"c049ec19-17c6-4838-873d-686ea408b5dc","Type":"ContainerStarted","Data":"aed967a9e26a4953b05282e6b63ae82ebd2e71a22873e17114e135bdff7405d8"} Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.826512 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsrns" event={"ID":"837935a1-6cd1-4472-a692-e9c13f2b7ad7","Type":"ContainerStarted","Data":"c89950207b7e1cefbcc128bf789259dc57b3fef423a4006ba0e3047f6589593c"} Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.830281 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" event={"ID":"a4a80548-d2f1-40ee-a409-dd3cbed9ead2","Type":"ContainerStarted","Data":"2ed02c3c641ee2116f51c345a856a0aa4fbf2d832c86d1073ae9fc3b05ef8caf"} Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.839494 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" event={"ID":"1dc73581-03ed-4c2a-8631-0e3adab48686","Type":"ContainerStarted","Data":"3a344d0f13897d81fca8401884822ccd9394945e9014992f11ca9565586ce5d9"} Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.845620 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" event={"ID":"5dfe4211-4bb6-47e4-9797-652393f66bc5","Type":"ContainerStarted","Data":"2733f566caeac0e0ae9cc726b4ac76dfdf19d147082ff0908667a3c686d7b036"} Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.847574 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-p2vmt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.847648 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p2vmt" podUID="c04759f6-7a1a-43cb-a705-41d2319646b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.919796 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.935197 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:06.420833493 +0000 UTC m=+145.726439684 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.937031 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-utilities\") pod \"certified-operators-h9w85\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.937063 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6t5ql\" (UniqueName: \"kubernetes.io/projected/48941e1e-2481-47ca-832d-047a6d3220c8-kube-api-access-6t5ql\") pod \"certified-operators-h9w85\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.937334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-catalog-content\") pod \"certified-operators-h9w85\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.937384 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:05 crc kubenswrapper[4949]: E1001 15:44:05.940506 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.440490675 +0000 UTC m=+145.746096866 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.987080 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6s25h"] Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.989571 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:05 crc kubenswrapper[4949]: I1001 15:44:05.994042 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.008237 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6s25h"] Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.039866 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.040032 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-utilities\") pod \"certified-operators-h9w85\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.040054 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6t5ql\" (UniqueName: \"kubernetes.io/projected/48941e1e-2481-47ca-832d-047a6d3220c8-kube-api-access-6t5ql\") pod \"certified-operators-h9w85\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.040091 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-catalog-content\") pod \"certified-operators-h9w85\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " 
pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.040515 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-catalog-content\") pod \"certified-operators-h9w85\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.040784 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.540769265 +0000 UTC m=+145.846375456 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.040988 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-utilities\") pod \"certified-operators-h9w85\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.074978 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6t5ql\" (UniqueName: \"kubernetes.io/projected/48941e1e-2481-47ca-832d-047a6d3220c8-kube-api-access-6t5ql\") pod \"certified-operators-h9w85\" (UID: 
\"48941e1e-2481-47ca-832d-047a6d3220c8\") " pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.103980 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.140907 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbl65\" (UniqueName: \"kubernetes.io/projected/afd2fd7e-b101-41fa-bc35-87cdf06d791a-kube-api-access-cbl65\") pod \"community-operators-6s25h\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.140988 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-catalog-content\") pod \"community-operators-6s25h\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.141009 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-utilities\") pod \"community-operators-6s25h\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.141036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 
15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.141367 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.641356255 +0000 UTC m=+145.946962446 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.153046 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zq8nf"] Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.184458 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-84mb5"] Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.185319 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.220312 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84mb5"] Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.245898 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.246179 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbl65\" (UniqueName: \"kubernetes.io/projected/afd2fd7e-b101-41fa-bc35-87cdf06d791a-kube-api-access-cbl65\") pod \"community-operators-6s25h\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.246262 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-catalog-content\") pod \"community-operators-6s25h\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.246291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-utilities\") pod \"community-operators-6s25h\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.247159 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-utilities\") pod \"community-operators-6s25h\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.247250 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.747231701 +0000 UTC m=+146.052837892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.247722 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-catalog-content\") pod \"community-operators-6s25h\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.281680 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbl65\" (UniqueName: \"kubernetes.io/projected/afd2fd7e-b101-41fa-bc35-87cdf06d791a-kube-api-access-cbl65\") pod \"community-operators-6s25h\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.323874 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.347380 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-utilities\") pod \"community-operators-84mb5\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.347515 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.347553 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-catalog-content\") pod \"community-operators-84mb5\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.347596 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66p4p\" (UniqueName: \"kubernetes.io/projected/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-kube-api-access-66p4p\") pod \"community-operators-84mb5\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.347947 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.847933572 +0000 UTC m=+146.153539773 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.452807 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.453417 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-catalog-content\") pod \"community-operators-84mb5\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.453474 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66p4p\" (UniqueName: \"kubernetes.io/projected/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-kube-api-access-66p4p\") pod \"community-operators-84mb5\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.453536 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-utilities\") pod \"community-operators-84mb5\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.454022 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-utilities\") pod \"community-operators-84mb5\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.454109 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:06.954089066 +0000 UTC m=+146.259695257 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.454371 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-catalog-content\") pod \"community-operators-84mb5\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.482642 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66p4p\" 
(UniqueName: \"kubernetes.io/projected/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-kube-api-access-66p4p\") pod \"community-operators-84mb5\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.483140 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-h9w85"] Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.502190 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.555052 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.555467 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.055454689 +0000 UTC m=+146.361060880 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.576435 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6s25h"] Oct 01 15:44:06 crc kubenswrapper[4949]: W1001 15:44:06.608417 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podafd2fd7e_b101_41fa_bc35_87cdf06d791a.slice/crio-cfe976f41015920846695c1ac8679079c9f3c357cca837923733892fcdbc42ce WatchSource:0}: Error finding container cfe976f41015920846695c1ac8679079c9f3c357cca837923733892fcdbc42ce: Status 404 returned error can't find the container with id cfe976f41015920846695c1ac8679079c9f3c357cca837923733892fcdbc42ce Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.656170 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.656355 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.156329687 +0000 UTC m=+146.461935868 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.656753 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.657069 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.157060488 +0000 UTC m=+146.462666679 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.703719 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:06 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:06 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:06 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.703769 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.757655 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.757802 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:07.257775701 +0000 UTC m=+146.563381892 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.758148 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.759770 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.259751159 +0000 UTC m=+146.565357360 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.773855 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-84mb5"] Oct 01 15:44:06 crc kubenswrapper[4949]: W1001 15:44:06.795572 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda7e7a912_5eb3_47a0_99c8_69c5bfc5a37c.slice/crio-d3840491823d0bf6c6bcf44456c0377a6286f3ca064c6410b7d994d452b3e6f6 WatchSource:0}: Error finding container d3840491823d0bf6c6bcf44456c0377a6286f3ca064c6410b7d994d452b3e6f6: Status 404 returned error can't find the container with id d3840491823d0bf6c6bcf44456c0377a6286f3ca064c6410b7d994d452b3e6f6 Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.859556 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.860259 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.360239755 +0000 UTC m=+146.665845946 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.861680 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" event={"ID":"1209b691-1ca5-4577-a46f-00b590053ec9","Type":"ContainerStarted","Data":"8425d2b1856452e6cf0686ce3a8d6862ace31cff6f4dd30c0d51dce3695eb53a"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.865366 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" event={"ID":"f22fcb2d-8db9-4200-bedb-b47ab677a3f2","Type":"ContainerStarted","Data":"12bf23bffabe147764c1c27564a4479d787dcbdf88ae5107f170aab497485376"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.866920 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s25h" event={"ID":"afd2fd7e-b101-41fa-bc35-87cdf06d791a","Type":"ContainerStarted","Data":"cfe976f41015920846695c1ac8679079c9f3c357cca837923733892fcdbc42ce"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.869007 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" event={"ID":"7ef25191-6af6-4921-973b-f2b127a01a6a","Type":"ContainerStarted","Data":"886659ae27fa508864f20c34ee4f3f13aa2744e7eac51d1d373c2e416eb2c862"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.889191 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" 
event={"ID":"98af3efd-3e5b-4bfd-96ae-f3629aa18f43","Type":"ContainerStarted","Data":"2d490e2f46af61ffe392c319b26ad8d1d9ce07b8f2e490d32a341c0386f66336"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.892056 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mqv74" podStartSLOduration=122.892038718 podStartE2EDuration="2m2.892038718s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:06.890238704 +0000 UTC m=+146.195844915" watchObservedRunningTime="2025-10-01 15:44:06.892038718 +0000 UTC m=+146.197644919" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.895388 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9w85" event={"ID":"48941e1e-2481-47ca-832d-047a6d3220c8","Type":"ContainerStarted","Data":"d22e45720f9122e21622e804b56e93b62a66aa097ea6bb08759e84ca6a88f201"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.924757 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" podStartSLOduration=122.924698255 podStartE2EDuration="2m2.924698255s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:06.923029696 +0000 UTC m=+146.228635897" watchObservedRunningTime="2025-10-01 15:44:06.924698255 +0000 UTC m=+146.230304446" Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.945949 4949 generic.go:334] "Generic (PLEG): container finished" podID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerID="44ebec05e05c4a73be29218a81b45c6388d6a3cc3338a94ecd3dbd446992b2d8" exitCode=0 Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 
15:44:06.946020 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq8nf" event={"ID":"46c5822e-cd0c-4c66-828c-a0f9a50879de","Type":"ContainerDied","Data":"44ebec05e05c4a73be29218a81b45c6388d6a3cc3338a94ecd3dbd446992b2d8"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.946047 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq8nf" event={"ID":"46c5822e-cd0c-4c66-828c-a0f9a50879de","Type":"ContainerStarted","Data":"20c23a00008a36a3933b89020c697db11ee2a0629cca84330c9d6c522f6c895f"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.959149 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" event={"ID":"0accd572-1725-44eb-9d94-94184e622485","Type":"ContainerStarted","Data":"a961947f3ca4ee6b114fef1c63c0b54d87b4a1017af1e6be97e84ef3e7ca9124"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.961438 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:06 crc kubenswrapper[4949]: E1001 15:44:06.961744 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.461700991 +0000 UTC m=+146.767307172 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.987807 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" event={"ID":"b3103144-2f18-4cc3-82ad-3fedf7e23914","Type":"ContainerStarted","Data":"83b48ed38b83269fdfefc045dcecfadbaf1bc0e8e2b3cd4a2f32167ad402d502"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.993473 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" event={"ID":"dc588bff-a558-4793-ba57-c1efaf23f92a","Type":"ContainerStarted","Data":"94d5b6a7363d3eab9dc294a0bbb0dd81f88a041ac2e292812ee5fece95da61a9"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.994926 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" event={"ID":"ff9c89a1-2f76-47e5-9e37-866d1d8adef2","Type":"ContainerStarted","Data":"f87f365c8b30e6ddb4f0ba94332db0e45d94a22bf4997cc018a8d6f9d9070e51"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.997581 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" event={"ID":"1dc73581-03ed-4c2a-8631-0e3adab48686","Type":"ContainerStarted","Data":"f3de0284e5ba5fca386916855246d442c2b75be15a709dd6c9fba49f5e172f41"} Oct 01 15:44:06 crc kubenswrapper[4949]: I1001 15:44:06.998881 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" 
event={"ID":"5dfe4211-4bb6-47e4-9797-652393f66bc5","Type":"ContainerStarted","Data":"ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.007732 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-9gv8b" podStartSLOduration=123.007717964 podStartE2EDuration="2m3.007717964s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.006925861 +0000 UTC m=+146.312532062" watchObservedRunningTime="2025-10-01 15:44:07.007717964 +0000 UTC m=+146.313324155" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.011532 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" event={"ID":"33bf5fa2-f552-4a5a-828d-30b3db9c29a3","Type":"ContainerStarted","Data":"871be3ca0c95b261cf690288a77f105fa952f83715b2066f3677ebdab01fcf2d"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.017739 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" event={"ID":"cc4f3054-5b96-488b-b209-fa9433c513ab","Type":"ContainerStarted","Data":"c88c02171f48b962a315460dee61eb9ce78075112e8a47569cfe938f06d53365"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.032174 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-t8vc2" podStartSLOduration=123.032155308 podStartE2EDuration="2m3.032155308s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.029633413 +0000 UTC m=+146.335239624" watchObservedRunningTime="2025-10-01 
15:44:07.032155308 +0000 UTC m=+146.337761499" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.034722 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" event={"ID":"a4a80548-d2f1-40ee-a409-dd3cbed9ead2","Type":"ContainerStarted","Data":"c752040110451d8dc01867df424374dbdd2559c6d9786aad1310b3589506488e"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.050375 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx" event={"ID":"971a7b5f-3076-4e58-a96c-d5902c05b319","Type":"ContainerStarted","Data":"d560ab565246dc8efa1b3bccaad3d8b266adef2fda550fbe24eac0400af0cb22"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.060840 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-kgxcz" event={"ID":"edc35680-cc63-485d-aa02-4e08f96d86fa","Type":"ContainerStarted","Data":"8e6de1cb7d0842a1ebdb4228fafe6bbcac39e1b2f3e9a460290eddc9cbfc1f1c"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.062015 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.062650 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.56262561 +0000 UTC m=+146.868231871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.064178 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84mb5" event={"ID":"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c","Type":"ContainerStarted","Data":"d3840491823d0bf6c6bcf44456c0377a6286f3ca064c6410b7d994d452b3e6f6"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.077859 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" event={"ID":"54bc2b08-ed4b-45fc-baa6-e681a412b2ed","Type":"ContainerStarted","Data":"82355230b5bb3ee6459858a4e4f9a949348b270191b444752c4602ac74ed6f07"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.078250 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" podStartSLOduration=122.078235472 podStartE2EDuration="2m2.078235472s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.06024177 +0000 UTC m=+146.365847981" watchObservedRunningTime="2025-10-01 15:44:07.078235472 +0000 UTC m=+146.383841673" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.081266 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-htjbx" podStartSLOduration=123.081252331 podStartE2EDuration="2m3.081252331s" podCreationTimestamp="2025-10-01 
15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.078583502 +0000 UTC m=+146.384189703" watchObservedRunningTime="2025-10-01 15:44:07.081252331 +0000 UTC m=+146.386858522" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.091627 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8s6sd" event={"ID":"bf8fbefc-06d3-4792-b11c-86e8488b231e","Type":"ContainerStarted","Data":"ec324e1f0042ef518a7df228db4c67de96ea1f248345483d0e0570bfab8b2715"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.093139 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" event={"ID":"08be9cf8-e8ce-4ef3-bde6-fc57dfb0df23","Type":"ContainerStarted","Data":"1db607b86d07845b91ebe46f8c025da540590a2a692ea18dce1cdf51f61329cb"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.116642 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-fv7wt" podStartSLOduration=123.116619039 podStartE2EDuration="2m3.116619039s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.112772375 +0000 UTC m=+146.418378596" watchObservedRunningTime="2025-10-01 15:44:07.116619039 +0000 UTC m=+146.422225230" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.133629 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" event={"ID":"c049ec19-17c6-4838-873d-686ea408b5dc","Type":"ContainerStarted","Data":"6340e767bad91407b505a59e72f13e4bc65e43005c060e317df895e9aa9f328a"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.136796 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-kgxcz" podStartSLOduration=8.136782096 podStartE2EDuration="8.136782096s" podCreationTimestamp="2025-10-01 15:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.135635803 +0000 UTC m=+146.441241994" watchObservedRunningTime="2025-10-01 15:44:07.136782096 +0000 UTC m=+146.442388287" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.149461 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" event={"ID":"9682d1e4-ba48-496f-822e-6b0262676cca","Type":"ContainerStarted","Data":"5a97ad46300714dddb1dafbdeb6458d7750784b36a03906d95c307cc9b78eea0"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.152117 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" event={"ID":"8bfda5f1-ce68-4682-bb75-e62f14514d81","Type":"ContainerStarted","Data":"a2bd87ee3819343f9879219899e31dc2c5778a73706bd6bf60dcf0b93a16f4c5"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.155852 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" event={"ID":"3ddcaaa4-4dfe-4b11-a24c-ffc2adbbeb5e","Type":"ContainerStarted","Data":"def44ccfedcc0e2506e46242a738181ca955851641cb18f37cce2ff528412033"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.159600 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-l2tfx" podStartSLOduration=123.159584181 podStartE2EDuration="2m3.159584181s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.158706676 
+0000 UTC m=+146.464312867" watchObservedRunningTime="2025-10-01 15:44:07.159584181 +0000 UTC m=+146.465190372" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.160160 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" event={"ID":"049bc083-906a-459a-9df8-1e1fb5ff8918","Type":"ContainerStarted","Data":"43406661d929e1f10d7b4c19e4173cbdfc3dc0858a07eab44d0e106c39ce27f1"} Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.160828 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-p2vmt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.160943 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p2vmt" podUID="c04759f6-7a1a-43cb-a705-41d2319646b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.170076 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.172278 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.672258487 +0000 UTC m=+146.977864758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.183389 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-gclj9" podStartSLOduration=122.183368386 podStartE2EDuration="2m2.183368386s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.181534402 +0000 UTC m=+146.487140603" watchObservedRunningTime="2025-10-01 15:44:07.183368386 +0000 UTC m=+146.488974577" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.221618 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-t7cmz" podStartSLOduration=123.221600768 podStartE2EDuration="2m3.221600768s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.202310037 +0000 UTC m=+146.507916228" watchObservedRunningTime="2025-10-01 15:44:07.221600768 +0000 UTC m=+146.527206959" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.222979 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-7t2lr" podStartSLOduration=122.222971229 podStartE2EDuration="2m2.222971229s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.221447154 +0000 UTC m=+146.527053425" watchObservedRunningTime="2025-10-01 15:44:07.222971229 +0000 UTC m=+146.528577420" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.273855 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.275326 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.775306229 +0000 UTC m=+147.080912430 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.375467 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.375943 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.875932979 +0000 UTC m=+147.181539170 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.476607 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.476806 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.976777776 +0000 UTC m=+147.282383967 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.477167 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.477502 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:07.977487237 +0000 UTC m=+147.283093428 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.539222 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-kwm7g" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.563015 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-jpqsm" podStartSLOduration=123.562992969 podStartE2EDuration="2m3.562992969s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:07.239341574 +0000 UTC m=+146.544947765" watchObservedRunningTime="2025-10-01 15:44:07.562992969 +0000 UTC m=+146.868599160" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.578369 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.578548 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:08.07852275 +0000 UTC m=+147.384128941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.579105 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.579444 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.079433446 +0000 UTC m=+147.385039697 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.591052 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-2vm94"] Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.592417 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.595112 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.611773 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vm94"] Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.681041 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.681355 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdqdr\" (UniqueName: \"kubernetes.io/projected/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-kube-api-access-zdqdr\") pod \"redhat-marketplace-2vm94\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " 
pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.681389 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-utilities\") pod \"redhat-marketplace-2vm94\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.681427 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-catalog-content\") pod \"redhat-marketplace-2vm94\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.681688 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.181657795 +0000 UTC m=+147.487264056 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.695879 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:07 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:07 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:07 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.695934 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.783698 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zdqdr\" (UniqueName: \"kubernetes.io/projected/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-kube-api-access-zdqdr\") pod \"redhat-marketplace-2vm94\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.783733 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-utilities\") pod \"redhat-marketplace-2vm94\" (UID: 
\"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.783759 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.783776 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-catalog-content\") pod \"redhat-marketplace-2vm94\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.784211 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-catalog-content\") pod \"redhat-marketplace-2vm94\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.784277 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-utilities\") pod \"redhat-marketplace-2vm94\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.784310 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:08.284295834 +0000 UTC m=+147.589902025 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.810070 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zdqdr\" (UniqueName: \"kubernetes.io/projected/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-kube-api-access-zdqdr\") pod \"redhat-marketplace-2vm94\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.885156 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.885357 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.385327506 +0000 UTC m=+147.690933697 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.885418 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.885757 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.385743978 +0000 UTC m=+147.691350239 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.907359 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.984221 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-cllpk"] Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.985551 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.986446 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.986554 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.486535803 +0000 UTC m=+147.792141994 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.986759 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:07 crc kubenswrapper[4949]: E1001 15:44:07.987052 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.487043489 +0000 UTC m=+147.792649680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:07 crc kubenswrapper[4949]: I1001 15:44:07.999558 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cllpk"] Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.087989 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.088192 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.588162114 +0000 UTC m=+147.893768305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.088322 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-catalog-content\") pod \"redhat-marketplace-cllpk\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.088369 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7gm8k\" (UniqueName: \"kubernetes.io/projected/2fce939d-7a50-418c-876e-05cc8619a809-kube-api-access-7gm8k\") pod \"redhat-marketplace-cllpk\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.088418 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-utilities\") pod \"redhat-marketplace-cllpk\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.088447 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.088720 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.5887086 +0000 UTC m=+147.894314791 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.131252 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vm94"] Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.182811 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" event={"ID":"54bc2b08-ed4b-45fc-baa6-e681a412b2ed","Type":"ContainerStarted","Data":"afe1388646bd257c1b1af58d0f144f669cc2b482b1379d731b2c4be00c3f7dfe"} Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.189306 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-8s6sd" event={"ID":"bf8fbefc-06d3-4792-b11c-86e8488b231e","Type":"ContainerStarted","Data":"df9709a91c48c2ccf53ef733fd39bc55ba289921d4f508ee0c97463d1e286618"} Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.189992 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.190381 
4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.190544 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.690509505 +0000 UTC m=+147.996115686 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.190693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7gm8k\" (UniqueName: \"kubernetes.io/projected/2fce939d-7a50-418c-876e-05cc8619a809-kube-api-access-7gm8k\") pod \"redhat-marketplace-cllpk\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.190764 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-utilities\") pod \"redhat-marketplace-cllpk\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc 
kubenswrapper[4949]: I1001 15:44:08.190804 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.190874 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-catalog-content\") pod \"redhat-marketplace-cllpk\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.191416 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-catalog-content\") pod \"redhat-marketplace-cllpk\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.191562 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-utilities\") pod \"redhat-marketplace-cllpk\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.191618 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.691577447 +0000 UTC m=+147.997183708 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.194672 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" event={"ID":"f22fcb2d-8db9-4200-bedb-b47ab677a3f2","Type":"ContainerStarted","Data":"e230b9bdb0feb778928f5f362e7fbac59451c5e9cbbd6fc0b9738b0975c15463"} Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.202598 4949 generic.go:334] "Generic (PLEG): container finished" podID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerID="a39ccf65c6d5bb62356609ef55f26ecbe3832bb02f7be6e6c5b1839927333d6b" exitCode=0 Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.202680 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s25h" event={"ID":"afd2fd7e-b101-41fa-bc35-87cdf06d791a","Type":"ContainerDied","Data":"a39ccf65c6d5bb62356609ef55f26ecbe3832bb02f7be6e6c5b1839927333d6b"} Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.205736 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.212277 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7gm8k\" (UniqueName: \"kubernetes.io/projected/2fce939d-7a50-418c-876e-05cc8619a809-kube-api-access-7gm8k\") pod \"redhat-marketplace-cllpk\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.214445 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" podStartSLOduration=124.214428154 podStartE2EDuration="2m4.214428154s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.212016652 +0000 UTC m=+147.517622843" watchObservedRunningTime="2025-10-01 15:44:08.214428154 +0000 UTC m=+147.520034345" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.214493 4949 generic.go:334] "Generic (PLEG): container finished" podID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerID="454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd" exitCode=0 Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.214557 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84mb5" event={"ID":"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c","Type":"ContainerDied","Data":"454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd"} Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.218665 4949 generic.go:334] "Generic (PLEG): container finished" podID="48941e1e-2481-47ca-832d-047a6d3220c8" containerID="cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d" exitCode=0 Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.218825 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9w85" event={"ID":"48941e1e-2481-47ca-832d-047a6d3220c8","Type":"ContainerDied","Data":"cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d"} Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.223925 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" 
event={"ID":"1dc73581-03ed-4c2a-8631-0e3adab48686","Type":"ContainerStarted","Data":"16227d11fbef99c9b2b49ba108ce124dc88201d91ad589cf4efab036ea7a7bae"} Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.224541 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.226203 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vm94" event={"ID":"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9","Type":"ContainerStarted","Data":"e9f89d1056edd940d47e5489fcf2ab5fc92b064001eb81d1285215b738cde6f2"} Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.230178 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-mlktd" podStartSLOduration=124.230166379 podStartE2EDuration="2m4.230166379s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.227731687 +0000 UTC m=+147.533337878" watchObservedRunningTime="2025-10-01 15:44:08.230166379 +0000 UTC m=+147.535772570" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.230615 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" event={"ID":"7ef25191-6af6-4921-973b-f2b127a01a6a","Type":"ContainerStarted","Data":"0aa3162730c414143cf7d4a24ab046f259280b8ae3ab722ee1dc2de917675955"} Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.271117 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-8s6sd" podStartSLOduration=9.271096712 podStartE2EDuration="9.271096712s" podCreationTimestamp="2025-10-01 15:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.265445875 +0000 UTC m=+147.571052076" watchObservedRunningTime="2025-10-01 15:44:08.271096712 +0000 UTC m=+147.576702903" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.296942 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.297959 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.797940527 +0000 UTC m=+148.103546708 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.301450 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.316481 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" podStartSLOduration=123.316461085 podStartE2EDuration="2m3.316461085s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.29636629 +0000 UTC m=+147.601972481" watchObservedRunningTime="2025-10-01 15:44:08.316461085 +0000 UTC m=+147.622067276" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.362465 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" podStartSLOduration=123.362443808 podStartE2EDuration="2m3.362443808s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.343461545 +0000 UTC m=+147.649067746" watchObservedRunningTime="2025-10-01 15:44:08.362443808 +0000 UTC m=+147.668049999" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.398999 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.399671 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod 
\"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.399788 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.400179 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:08.900160824 +0000 UTC m=+148.205767065 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.408564 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.409955 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.430715 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-pblzs" podStartSLOduration=123.430673408 podStartE2EDuration="2m3.430673408s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.3871984 +0000 UTC m=+147.692804601" watchObservedRunningTime="2025-10-01 15:44:08.430673408 +0000 UTC m=+147.736279609" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.448348 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-twgpv" podStartSLOduration=123.448328921 podStartE2EDuration="2m3.448328921s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.447192547 +0000 UTC m=+147.752798748" watchObservedRunningTime="2025-10-01 15:44:08.448328921 +0000 UTC m=+147.753935112" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.501081 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-tszj5" podStartSLOduration=124.501059853 podStartE2EDuration="2m4.501059853s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.497414405 +0000 UTC m=+147.803020596" watchObservedRunningTime="2025-10-01 15:44:08.501059853 +0000 UTC m=+147.806666044" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.501474 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" podStartSLOduration=123.501457594 podStartE2EDuration="2m3.501457594s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.473374772 +0000 UTC m=+147.778980964" watchObservedRunningTime="2025-10-01 15:44:08.501457594 +0000 UTC m=+147.807063785" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.502442 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.502708 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.502797 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.504805 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:09.004785293 +0000 UTC m=+148.310391484 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.509995 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.518522 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.520350 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.524660 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" podStartSLOduration=123.524639161 podStartE2EDuration="2m3.524639161s" podCreationTimestamp="2025-10-01 15:42:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:08.523689163 +0000 UTC m=+147.829295364" watchObservedRunningTime="2025-10-01 15:44:08.524639161 +0000 UTC m=+147.830245352" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.530684 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.604909 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.605355 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:09.105341121 +0000 UTC m=+148.410947312 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.696758 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:08 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:08 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:08 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.696810 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.709762 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.716103 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:09.216076101 +0000 UTC m=+148.521682292 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.740635 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.812564 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.812880 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:09.312867757 +0000 UTC m=+148.618473948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.909439 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-cllpk"] Oct 01 15:44:08 crc kubenswrapper[4949]: I1001 15:44:08.913667 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:08 crc kubenswrapper[4949]: E1001 15:44:08.914043 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:09.414025033 +0000 UTC m=+148.719631224 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.015872 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.016283 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:09.516267262 +0000 UTC m=+148.821873453 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.038750 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ltmmw"] Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.039897 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.042430 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.071648 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltmmw"] Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.117625 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.117901 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2pk7\" (UniqueName: \"kubernetes.io/projected/470a3d16-cc7d-4824-a36e-a6004d9b530f-kube-api-access-v2pk7\") pod \"redhat-operators-ltmmw\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.117951 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-catalog-content\") pod \"redhat-operators-ltmmw\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.117987 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-utilities\") pod \"redhat-operators-ltmmw\" (UID: 
\"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.118387 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:09.618319894 +0000 UTC m=+148.923926085 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.220795 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v2pk7\" (UniqueName: \"kubernetes.io/projected/470a3d16-cc7d-4824-a36e-a6004d9b530f-kube-api-access-v2pk7\") pod \"redhat-operators-ltmmw\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.221085 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-catalog-content\") pod \"redhat-operators-ltmmw\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.221109 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.221154 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-utilities\") pod \"redhat-operators-ltmmw\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.221542 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-utilities\") pod \"redhat-operators-ltmmw\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.221609 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-catalog-content\") pod \"redhat-operators-ltmmw\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.221768 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:09.721757278 +0000 UTC m=+149.027363469 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.256861 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v2pk7\" (UniqueName: \"kubernetes.io/projected/470a3d16-cc7d-4824-a36e-a6004d9b530f-kube-api-access-v2pk7\") pod \"redhat-operators-ltmmw\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.261211 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cllpk" event={"ID":"2fce939d-7a50-418c-876e-05cc8619a809","Type":"ContainerStarted","Data":"4a913027f74f0c671308cae5906c5a8d36c9c7aff0ef84329292e7857098d1bc"} Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.272289 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"bcd802ce1cda797c37b6ca041500133b98396ba82cc5f0bfcc1859eee01a83b5"} Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.288245 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"7a461bb88d992def6df629431344c0e517a720a5cc2675647869822d67afcf80"} Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.322397 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.322756 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:09.822741168 +0000 UTC m=+149.128347359 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.331472 4949 generic.go:334] "Generic (PLEG): container finished" podID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerID="8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0" exitCode=0 Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.333531 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vm94" event={"ID":"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9","Type":"ContainerDied","Data":"8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0"} Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.384406 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jvj9j"] Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.385390 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.397205 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvj9j"] Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.423752 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.425940 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:09.925919194 +0000 UTC m=+149.231525385 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.462649 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.524396 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.524522 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.024500354 +0000 UTC m=+149.330106555 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.524799 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-catalog-content\") pod \"redhat-operators-jvj9j\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.524863 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-utilities\") pod 
\"redhat-operators-jvj9j\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.524880 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssfnc\" (UniqueName: \"kubernetes.io/projected/a935699b-031c-41c8-ae81-e631cfb6d465-kube-api-access-ssfnc\") pod \"redhat-operators-jvj9j\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.524970 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.525225 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.025216996 +0000 UTC m=+149.330823187 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.611591 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.612099 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.612186 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.616575 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.616653 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.628186 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.628469 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-catalog-content\") pod \"redhat-operators-jvj9j\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.628549 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-utilities\") pod \"redhat-operators-jvj9j\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.628572 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssfnc\" (UniqueName: \"kubernetes.io/projected/a935699b-031c-41c8-ae81-e631cfb6d465-kube-api-access-ssfnc\") pod \"redhat-operators-jvj9j\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.629001 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.128984459 +0000 UTC m=+149.434590650 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.629571 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-catalog-content\") pod \"redhat-operators-jvj9j\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.629799 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-utilities\") pod \"redhat-operators-jvj9j\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.653077 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssfnc\" (UniqueName: \"kubernetes.io/projected/a935699b-031c-41c8-ae81-e631cfb6d465-kube-api-access-ssfnc\") pod \"redhat-operators-jvj9j\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.702283 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:09 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 
15:44:09 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:09 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.702574 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.729500 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.729573 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.729618 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.729904 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:10.229886918 +0000 UTC m=+149.535493109 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.730527 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ltmmw"] Oct 01 15:44:09 crc kubenswrapper[4949]: W1001 15:44:09.740612 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod470a3d16_cc7d_4824_a36e_a6004d9b530f.slice/crio-2d87cbe69ac722606b3b33c0867feb83356ccd65686a71c23c1323296494b73e WatchSource:0}: Error finding container 2d87cbe69ac722606b3b33c0867feb83356ccd65686a71c23c1323296494b73e: Status 404 returned error can't find the container with id 2d87cbe69ac722606b3b33c0867feb83356ccd65686a71c23c1323296494b73e Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.830948 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.831190 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:10.331159667 +0000 UTC m=+149.636765868 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.831313 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.831399 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.831511 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.831756 4949 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.331744674 +0000 UTC m=+149.637350865 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.831454 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.850088 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.852959 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.933780 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.933980 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.433952222 +0000 UTC m=+149.739558413 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.934047 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:09 crc kubenswrapper[4949]: E1001 15:44:09.934389 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.434378064 +0000 UTC m=+149.739984255 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:09 crc kubenswrapper[4949]: I1001 15:44:09.937400 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.034997 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.035398 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.535379475 +0000 UTC m=+149.840985666 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.123421 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jvj9j"] Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.136753 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.137144 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.637110418 +0000 UTC m=+149.942716609 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: W1001 15:44:10.173458 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda935699b_031c_41c8_ae81_e631cfb6d465.slice/crio-d65c9443d26eaeca923df1986010a516fe1522a81f2b78ab276a6f1b2d5423fe WatchSource:0}: Error finding container d65c9443d26eaeca923df1986010a516fe1522a81f2b78ab276a6f1b2d5423fe: Status 404 returned error can't find the container with id d65c9443d26eaeca923df1986010a516fe1522a81f2b78ab276a6f1b2d5423fe Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.239573 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.239981 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.739961375 +0000 UTC m=+150.045567566 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.310455 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.340831 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.341415 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.841401329 +0000 UTC m=+150.147007520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.397741 4949 generic.go:334] "Generic (PLEG): container finished" podID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerID="cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6" exitCode=0 Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.397802 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltmmw" event={"ID":"470a3d16-cc7d-4824-a36e-a6004d9b530f","Type":"ContainerDied","Data":"cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6"} Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.397827 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltmmw" event={"ID":"470a3d16-cc7d-4824-a36e-a6004d9b530f","Type":"ContainerStarted","Data":"2d87cbe69ac722606b3b33c0867feb83356ccd65686a71c23c1323296494b73e"} Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.416690 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvj9j" event={"ID":"a935699b-031c-41c8-ae81-e631cfb6d465","Type":"ContainerStarted","Data":"d65c9443d26eaeca923df1986010a516fe1522a81f2b78ab276a6f1b2d5423fe"} Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.436593 4949 generic.go:334] "Generic (PLEG): container finished" podID="2fce939d-7a50-418c-876e-05cc8619a809" containerID="111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6" exitCode=0 Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.436688 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cllpk" event={"ID":"2fce939d-7a50-418c-876e-05cc8619a809","Type":"ContainerDied","Data":"111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6"} Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.441694 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.442043 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:10.942028619 +0000 UTC m=+150.247634810 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.443807 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"ef8b327cf2bcab98c1e81c0765f607c5f862ac19da24e87646f7244319e0be82"} Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.465751 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"028a5613f0c0933f7d255c4d23fb0dcb237de3eb7f385ca39c9ba4494ff8750a"} Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.465791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"43d17d7c15d20736ed748ea949636bdae5e2848921b6ba01ebceb86d3d10d15b"} Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.466415 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.489159 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"1e1e94fae552b2d9b428acdb83a7a39b6433ec4e0016872f558e0027c723dd56"} 
Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.492261 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsrns" event={"ID":"837935a1-6cd1-4472-a692-e9c13f2b7ad7","Type":"ContainerStarted","Data":"3419f1ff717a44a729d61e1113b0f748f586db0f18b6e0aee6da70b4a2e479be"} Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.545897 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.546236 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.046223905 +0000 UTC m=+150.351830096 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.647122 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.647268 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.147241207 +0000 UTC m=+150.452847398 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.647359 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.648619 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.148603848 +0000 UTC m=+150.454210039 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.698157 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:10 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:10 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:10 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.698212 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.748469 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.748614 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2025-10-01 15:44:11.248581448 +0000 UTC m=+150.554187629 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.748689 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.748962 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.248948399 +0000 UTC m=+150.554554590 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.849720 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.850037 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.350008723 +0000 UTC m=+150.655614914 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.850270 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.850590 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.350575119 +0000 UTC m=+150.656181320 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.951637 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.951797 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.451765206 +0000 UTC m=+150.757371397 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:10 crc kubenswrapper[4949]: I1001 15:44:10.951988 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:10 crc kubenswrapper[4949]: E1001 15:44:10.952342 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.452329213 +0000 UTC m=+150.757935404 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.052841 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.053291 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.553271632 +0000 UTC m=+150.858877823 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.154992 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.155453 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.655436829 +0000 UTC m=+150.961043020 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.256217 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.256567 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.756550223 +0000 UTC m=+151.062156414 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.360865 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.361404 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.861387638 +0000 UTC m=+151.166993829 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.461762 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.462154 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:11.962115271 +0000 UTC m=+151.267721462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.473387 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.473436 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.484092 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.489297 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.527580 4949 generic.go:334] "Generic (PLEG): container finished" podID="a935699b-031c-41c8-ae81-e631cfb6d465" containerID="ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac" exitCode=0 Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.527881 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvj9j" event={"ID":"a935699b-031c-41c8-ae81-e631cfb6d465","Type":"ContainerDied","Data":"ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac"} Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.563593 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.565465 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4f942e26-b3f7-4cea-9d31-b4e882f1250a","Type":"ContainerStarted","Data":"9966fb39b6cc8528397de2c4278752da7ee2cf1db6d8dae0c4008b1c4ca32f3a"} Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.565502 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4f942e26-b3f7-4cea-9d31-b4e882f1250a","Type":"ContainerStarted","Data":"34c118a96a6bfbb9991ba0c6f3d3f0cff836cf7383b5e1952b6875c59233efff"} Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.566357 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.066340478 +0000 UTC m=+151.371946669 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.581437 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.581477 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.584307 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-np8v7" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.667024 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.667783 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.167766862 +0000 UTC m=+151.473373053 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.677796 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.677774778 podStartE2EDuration="2.677774778s" podCreationTimestamp="2025-10-01 15:44:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:11.671321128 +0000 UTC m=+150.976927329" watchObservedRunningTime="2025-10-01 15:44:11.677774778 +0000 UTC m=+150.983380969" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.700456 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:11 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:11 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:11 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.700524 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.773866 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.774307 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.274289676 +0000 UTC m=+151.579895867 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.874506 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.874970 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.374954648 +0000 UTC m=+151.680560839 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:11 crc kubenswrapper[4949]: I1001 15:44:11.983043 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:11 crc kubenswrapper[4949]: E1001 15:44:11.983404 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.4833899 +0000 UTC m=+151.788996091 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.089022 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.089317 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.589302337 +0000 UTC m=+151.894908528 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.191061 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.191865 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.691632647 +0000 UTC m=+151.997238828 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.291866 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.292303 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.792287388 +0000 UTC m=+152.097893579 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.297466 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-p2vmt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.297514 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p2vmt" podUID="c04759f6-7a1a-43cb-a705-41d2319646b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.297471 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-p2vmt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.298580 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p2vmt" podUID="c04759f6-7a1a-43cb-a705-41d2319646b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.299885 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.299910 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.301357 4949 patch_prober.go:28] interesting pod/console-f9d7485db-xlwdp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.301391 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xlwdp" podUID="62b77904-e0d8-4a98-b6e0-49b2c18821db" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.395938 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.396269 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.896255678 +0000 UTC m=+152.201861869 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.406739 4949 patch_prober.go:28] interesting pod/apiserver-76f77b778f-hjkx4 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Oct 01 15:44:12 crc kubenswrapper[4949]: [+]log ok Oct 01 15:44:12 crc kubenswrapper[4949]: [+]etcd ok Oct 01 15:44:12 crc kubenswrapper[4949]: [+]poststarthook/start-apiserver-admission-initializer ok Oct 01 15:44:12 crc kubenswrapper[4949]: [+]poststarthook/generic-apiserver-start-informers ok Oct 01 15:44:12 crc kubenswrapper[4949]: [+]poststarthook/max-in-flight-filter ok Oct 01 15:44:12 crc kubenswrapper[4949]: [+]poststarthook/storage-object-count-tracker-hook ok Oct 01 15:44:12 crc kubenswrapper[4949]: [+]poststarthook/image.openshift.io-apiserver-caches ok Oct 01 15:44:12 crc kubenswrapper[4949]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Oct 01 15:44:12 crc kubenswrapper[4949]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Oct 01 15:44:12 crc kubenswrapper[4949]: [+]poststarthook/project.openshift.io-projectcache ok Oct 01 15:44:12 crc kubenswrapper[4949]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Oct 01 15:44:12 crc kubenswrapper[4949]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Oct 01 15:44:12 crc kubenswrapper[4949]: [+]poststarthook/openshift.io-restmapperupdater ok Oct 01 15:44:12 crc kubenswrapper[4949]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Oct 01 15:44:12 crc kubenswrapper[4949]: livez check failed Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.406794 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" podUID="54bc2b08-ed4b-45fc-baa6-e681a412b2ed" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.497220 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.497347 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.997324362 +0000 UTC m=+152.302930563 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.497718 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.498032 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:12.998020072 +0000 UTC m=+152.303626263 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.600218 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.600556 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.100517357 +0000 UTC m=+152.406123558 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.600653 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.600992 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.100978492 +0000 UTC m=+152.406584703 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.620071 4949 generic.go:334] "Generic (PLEG): container finished" podID="98af3efd-3e5b-4bfd-96ae-f3629aa18f43" containerID="2d490e2f46af61ffe392c319b26ad8d1d9ce07b8f2e490d32a341c0386f66336" exitCode=0 Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.620197 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" event={"ID":"98af3efd-3e5b-4bfd-96ae-f3629aa18f43","Type":"ContainerDied","Data":"2d490e2f46af61ffe392c319b26ad8d1d9ce07b8f2e490d32a341c0386f66336"} Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.625404 4949 generic.go:334] "Generic (PLEG): container finished" podID="4f942e26-b3f7-4cea-9d31-b4e882f1250a" containerID="9966fb39b6cc8528397de2c4278752da7ee2cf1db6d8dae0c4008b1c4ca32f3a" exitCode=0 Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.625574 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4f942e26-b3f7-4cea-9d31-b4e882f1250a","Type":"ContainerDied","Data":"9966fb39b6cc8528397de2c4278752da7ee2cf1db6d8dae0c4008b1c4ca32f3a"} Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.693310 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.698680 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:12 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:12 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:12 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.698731 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.702316 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.702467 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.202443497 +0000 UTC m=+152.508049688 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.702556 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.703585 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.20357038 +0000 UTC m=+152.509176571 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.757160 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.772045 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-hbkqd" Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.804856 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.806076 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.306044705 +0000 UTC m=+152.611650906 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:12 crc kubenswrapper[4949]: I1001 15:44:12.919868 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:12 crc kubenswrapper[4949]: E1001 15:44:12.920208 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.420197036 +0000 UTC m=+152.725803227 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.018011 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.020838 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.021208 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.521188567 +0000 UTC m=+152.826794758 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.022812 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-w9c7m" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.074200 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.079070 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.123077 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.123524 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.623503107 +0000 UTC m=+152.929109298 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.226297 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.226482 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.726456786 +0000 UTC m=+153.032062977 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.227702 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.228034 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.728016062 +0000 UTC m=+153.033622253 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.279137 4949 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.330858 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.331048 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.831014983 +0000 UTC m=+153.136621184 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.331867 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.332408 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.832396524 +0000 UTC m=+153.138002715 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.432806 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.432944 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.932925452 +0000 UTC m=+153.238531643 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.433035 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.433337 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:13.933329454 +0000 UTC m=+153.238935645 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.533748 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.534049 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:14.034021726 +0000 UTC m=+153.339627917 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.534167 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.535006 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:14.034995395 +0000 UTC m=+153.340601586 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.634813 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.634985 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:14.134963556 +0000 UTC m=+153.440569747 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.635190 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.635542 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2025-10-01 15:44:14.135532032 +0000 UTC m=+153.441138213 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-47lbj" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.663492 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsrns" event={"ID":"837935a1-6cd1-4472-a692-e9c13f2b7ad7","Type":"ContainerStarted","Data":"b4d150761343db305fb0a98465d3fc6973fe5acfca3861655fcd5cd2ffe5ab4b"} Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.697006 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:13 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:13 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:13 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.697058 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.736353 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:13 crc kubenswrapper[4949]: E1001 15:44:13.737114 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2025-10-01 15:44:14.23709541 +0000 UTC m=+153.542701601 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.765233 4949 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2025-10-01T15:44:13.279192469Z","Handler":null,"Name":""} Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.773444 4949 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.773507 4949 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.843495 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.846872 4949 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.846907 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.881258 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-47lbj\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.945169 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.966719 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.975545 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.989637 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.990384 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.993383 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.993624 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Oct 01 15:44:13 crc kubenswrapper[4949]: I1001 15:44:13.997638 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.064749 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.154219 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e8b8282-5372-428c-b11c-94f059bc9a0e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2e8b8282-5372-428c-b11c-94f059bc9a0e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.154351 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e8b8282-5372-428c-b11c-94f059bc9a0e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2e8b8282-5372-428c-b11c-94f059bc9a0e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.176181 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.255880 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gwklq\" (UniqueName: \"kubernetes.io/projected/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-kube-api-access-gwklq\") pod \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.255981 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-secret-volume\") pod \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.256012 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-config-volume\") pod \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\" (UID: \"98af3efd-3e5b-4bfd-96ae-f3629aa18f43\") " Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.256282 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e8b8282-5372-428c-b11c-94f059bc9a0e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2e8b8282-5372-428c-b11c-94f059bc9a0e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.256383 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e8b8282-5372-428c-b11c-94f059bc9a0e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2e8b8282-5372-428c-b11c-94f059bc9a0e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.257800 4949 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-config-volume" (OuterVolumeSpecName: "config-volume") pod "98af3efd-3e5b-4bfd-96ae-f3629aa18f43" (UID: "98af3efd-3e5b-4bfd-96ae-f3629aa18f43"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.257866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e8b8282-5372-428c-b11c-94f059bc9a0e-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"2e8b8282-5372-428c-b11c-94f059bc9a0e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.264584 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-kube-api-access-gwklq" (OuterVolumeSpecName: "kube-api-access-gwklq") pod "98af3efd-3e5b-4bfd-96ae-f3629aa18f43" (UID: "98af3efd-3e5b-4bfd-96ae-f3629aa18f43"). InnerVolumeSpecName "kube-api-access-gwklq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.264923 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "98af3efd-3e5b-4bfd-96ae-f3629aa18f43" (UID: "98af3efd-3e5b-4bfd-96ae-f3629aa18f43"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.276606 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e8b8282-5372-428c-b11c-94f059bc9a0e-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"2e8b8282-5372-428c-b11c-94f059bc9a0e\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.357016 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kubelet-dir\") pod \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\" (UID: \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\") " Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.357071 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4f942e26-b3f7-4cea-9d31-b4e882f1250a" (UID: "4f942e26-b3f7-4cea-9d31-b4e882f1250a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.357514 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kube-api-access\") pod \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\" (UID: \"4f942e26-b3f7-4cea-9d31-b4e882f1250a\") " Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.357788 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gwklq\" (UniqueName: \"kubernetes.io/projected/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-kube-api-access-gwklq\") on node \"crc\" DevicePath \"\"" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.357807 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.357823 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.357831 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/98af3efd-3e5b-4bfd-96ae-f3629aa18f43-config-volume\") on node \"crc\" DevicePath \"\"" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.363641 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4f942e26-b3f7-4cea-9d31-b4e882f1250a" (UID: "4f942e26-b3f7-4cea-9d31-b4e882f1250a"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.378754 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.445773 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47lbj"] Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.459477 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4f942e26-b3f7-4cea-9d31-b4e882f1250a-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 01 15:44:14 crc kubenswrapper[4949]: W1001 15:44:14.491290 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2e81ff5c_f656_4f24_bf49_33fbec1f7052.slice/crio-75c4790252abdaafdd08e664512529488e29683c3a8549c2002436046be9ad70 WatchSource:0}: Error finding container 75c4790252abdaafdd08e664512529488e29683c3a8549c2002436046be9ad70: Status 404 returned error can't find the container with id 75c4790252abdaafdd08e664512529488e29683c3a8549c2002436046be9ad70 Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.671107 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"4f942e26-b3f7-4cea-9d31-b4e882f1250a","Type":"ContainerDied","Data":"34c118a96a6bfbb9991ba0c6f3d3f0cff836cf7383b5e1952b6875c59233efff"} Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.671170 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34c118a96a6bfbb9991ba0c6f3d3f0cff836cf7383b5e1952b6875c59233efff" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.671227 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.675538 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" event={"ID":"98af3efd-3e5b-4bfd-96ae-f3629aa18f43","Type":"ContainerDied","Data":"add42f11318f86015bb0b00f88cd7acf6e9bd892be2d3c74e409b0f0fc1dc12a"} Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.675774 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="add42f11318f86015bb0b00f88cd7acf6e9bd892be2d3c74e409b0f0fc1dc12a" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.675575 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r" Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.681039 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" event={"ID":"2e81ff5c-f656-4f24-bf49-33fbec1f7052","Type":"ContainerStarted","Data":"75c4790252abdaafdd08e664512529488e29683c3a8549c2002436046be9ad70"} Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.693562 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsrns" event={"ID":"837935a1-6cd1-4472-a692-e9c13f2b7ad7","Type":"ContainerStarted","Data":"c7834f9cf4aa68882ac1dcfed517747a6a848b0ca76c484d9744700b02bafce2"} Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.699349 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:14 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:14 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:14 crc 
kubenswrapper[4949]: healthz check failed Oct 01 15:44:14 crc kubenswrapper[4949]: I1001 15:44:14.699391 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.090714 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Oct 01 15:44:15 crc kubenswrapper[4949]: W1001 15:44:15.125450 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod2e8b8282_5372_428c_b11c_94f059bc9a0e.slice/crio-328bda78434b22bd55371fba476a71aa0096d768d90d2357426f3ee67f6e9db2 WatchSource:0}: Error finding container 328bda78434b22bd55371fba476a71aa0096d768d90d2357426f3ee67f6e9db2: Status 404 returned error can't find the container with id 328bda78434b22bd55371fba476a71aa0096d768d90d2357426f3ee67f6e9db2 Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.637814 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.706593 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:15 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:15 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:15 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.706649 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" 
podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.714663 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" event={"ID":"2e81ff5c-f656-4f24-bf49-33fbec1f7052","Type":"ContainerStarted","Data":"7b8a6c2394399b015d49a35cff7d8692b28191ee5c40aa0a9bc1f42f61d3e94b"} Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.715663 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.748993 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-rsrns" event={"ID":"837935a1-6cd1-4472-a692-e9c13f2b7ad7","Type":"ContainerStarted","Data":"d7269805c3eaa3fa36b57979bf2935f77350bb13c8950a48fa3aaaa070059cfe"} Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.749362 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" podStartSLOduration=131.749344228 podStartE2EDuration="2m11.749344228s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:15.742149455 +0000 UTC m=+155.047755656" watchObservedRunningTime="2025-10-01 15:44:15.749344228 +0000 UTC m=+155.054950419" Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.774654 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-rsrns" podStartSLOduration=16.774630097 podStartE2EDuration="16.774630097s" podCreationTimestamp="2025-10-01 15:43:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2025-10-01 15:44:15.774371459 +0000 UTC m=+155.079977650" watchObservedRunningTime="2025-10-01 15:44:15.774630097 +0000 UTC m=+155.080236288" Oct 01 15:44:15 crc kubenswrapper[4949]: I1001 15:44:15.780258 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2e8b8282-5372-428c-b11c-94f059bc9a0e","Type":"ContainerStarted","Data":"328bda78434b22bd55371fba476a71aa0096d768d90d2357426f3ee67f6e9db2"} Oct 01 15:44:16 crc kubenswrapper[4949]: I1001 15:44:16.587052 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:44:16 crc kubenswrapper[4949]: I1001 15:44:16.592231 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-hjkx4" Oct 01 15:44:16 crc kubenswrapper[4949]: I1001 15:44:16.698364 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:16 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:16 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:16 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:16 crc kubenswrapper[4949]: I1001 15:44:16.698420 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:16 crc kubenswrapper[4949]: I1001 15:44:16.792506 4949 generic.go:334] "Generic (PLEG): container finished" podID="2e8b8282-5372-428c-b11c-94f059bc9a0e" containerID="f7a2a0037bfae8a39135ac36e70f5ef001c1752c3fc5103a02b05fc7205d9a7e" exitCode=0 Oct 01 15:44:16 crc kubenswrapper[4949]: I1001 
15:44:16.792618 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2e8b8282-5372-428c-b11c-94f059bc9a0e","Type":"ContainerDied","Data":"f7a2a0037bfae8a39135ac36e70f5ef001c1752c3fc5103a02b05fc7205d9a7e"} Oct 01 15:44:17 crc kubenswrapper[4949]: I1001 15:44:17.696750 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:17 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:17 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:17 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:17 crc kubenswrapper[4949]: I1001 15:44:17.697056 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:17 crc kubenswrapper[4949]: I1001 15:44:17.785324 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-8s6sd" Oct 01 15:44:18 crc kubenswrapper[4949]: I1001 15:44:18.038558 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:44:18 crc kubenswrapper[4949]: I1001 15:44:18.038619 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: 
connect: connection refused" Oct 01 15:44:18 crc kubenswrapper[4949]: I1001 15:44:18.697913 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:18 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:18 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:18 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:18 crc kubenswrapper[4949]: I1001 15:44:18.698211 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:19 crc kubenswrapper[4949]: I1001 15:44:19.392857 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:44:19 crc kubenswrapper[4949]: I1001 15:44:19.696518 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:19 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:19 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:19 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:19 crc kubenswrapper[4949]: I1001 15:44:19.696632 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:20 crc kubenswrapper[4949]: I1001 15:44:20.695395 4949 patch_prober.go:28] interesting 
pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:20 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:20 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:20 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:20 crc kubenswrapper[4949]: I1001 15:44:20.695442 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:21 crc kubenswrapper[4949]: I1001 15:44:21.696358 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:21 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:21 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:21 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:21 crc kubenswrapper[4949]: I1001 15:44:21.696722 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:22 crc kubenswrapper[4949]: I1001 15:44:22.297319 4949 patch_prober.go:28] interesting pod/downloads-7954f5f757-p2vmt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Oct 01 15:44:22 crc kubenswrapper[4949]: I1001 15:44:22.297394 4949 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-p2vmt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Oct 01 15:44:22 crc kubenswrapper[4949]: I1001 15:44:22.297460 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-p2vmt" podUID="c04759f6-7a1a-43cb-a705-41d2319646b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Oct 01 15:44:22 crc kubenswrapper[4949]: I1001 15:44:22.297398 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-p2vmt" podUID="c04759f6-7a1a-43cb-a705-41d2319646b6" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.31:8080/\": dial tcp 10.217.0.31:8080: connect: connection refused" Oct 01 15:44:22 crc kubenswrapper[4949]: I1001 15:44:22.300069 4949 patch_prober.go:28] interesting pod/console-f9d7485db-xlwdp container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Oct 01 15:44:22 crc kubenswrapper[4949]: I1001 15:44:22.300177 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-xlwdp" podUID="62b77904-e0d8-4a98-b6e0-49b2c18821db" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Oct 01 15:44:22 crc kubenswrapper[4949]: I1001 15:44:22.694172 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:22 crc 
kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:22 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:22 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:22 crc kubenswrapper[4949]: I1001 15:44:22.694237 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:23 crc kubenswrapper[4949]: I1001 15:44:23.706323 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:23 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:23 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:23 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:23 crc kubenswrapper[4949]: I1001 15:44:23.706634 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.526714 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.634208 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e8b8282-5372-428c-b11c-94f059bc9a0e-kube-api-access\") pod \"2e8b8282-5372-428c-b11c-94f059bc9a0e\" (UID: \"2e8b8282-5372-428c-b11c-94f059bc9a0e\") " Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.634358 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e8b8282-5372-428c-b11c-94f059bc9a0e-kubelet-dir\") pod \"2e8b8282-5372-428c-b11c-94f059bc9a0e\" (UID: \"2e8b8282-5372-428c-b11c-94f059bc9a0e\") " Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.634513 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2e8b8282-5372-428c-b11c-94f059bc9a0e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "2e8b8282-5372-428c-b11c-94f059bc9a0e" (UID: "2e8b8282-5372-428c-b11c-94f059bc9a0e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.634760 4949 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2e8b8282-5372-428c-b11c-94f059bc9a0e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.641684 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e8b8282-5372-428c-b11c-94f059bc9a0e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "2e8b8282-5372-428c-b11c-94f059bc9a0e" (UID: "2e8b8282-5372-428c-b11c-94f059bc9a0e"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.696566 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:24 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:24 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:24 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.696638 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.736475 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/2e8b8282-5372-428c-b11c-94f059bc9a0e-kube-api-access\") on node \"crc\" DevicePath \"\"" Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.852040 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"2e8b8282-5372-428c-b11c-94f059bc9a0e","Type":"ContainerDied","Data":"328bda78434b22bd55371fba476a71aa0096d768d90d2357426f3ee67f6e9db2"} Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.852095 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="328bda78434b22bd55371fba476a71aa0096d768d90d2357426f3ee67f6e9db2" Oct 01 15:44:24 crc kubenswrapper[4949]: I1001 15:44:24.852197 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Oct 01 15:44:25 crc kubenswrapper[4949]: I1001 15:44:25.694707 4949 patch_prober.go:28] interesting pod/router-default-5444994796-58ggj container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Oct 01 15:44:25 crc kubenswrapper[4949]: [-]has-synced failed: reason withheld Oct 01 15:44:25 crc kubenswrapper[4949]: [+]process-running ok Oct 01 15:44:25 crc kubenswrapper[4949]: healthz check failed Oct 01 15:44:25 crc kubenswrapper[4949]: I1001 15:44:25.694767 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-58ggj" podUID="0dede0f5-5799-426f-94dc-cba3a14494fb" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Oct 01 15:44:26 crc kubenswrapper[4949]: I1001 15:44:26.695794 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:26 crc kubenswrapper[4949]: I1001 15:44:26.698389 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-58ggj" Oct 01 15:44:27 crc kubenswrapper[4949]: I1001 15:44:27.068887 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: \"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:44:27 crc kubenswrapper[4949]: I1001 15:44:27.100288 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd-metrics-certs\") pod \"network-metrics-daemon-kfx8b\" (UID: 
\"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd\") " pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:44:27 crc kubenswrapper[4949]: I1001 15:44:27.324692 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-kfx8b" Oct 01 15:44:32 crc kubenswrapper[4949]: I1001 15:44:32.305493 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-p2vmt" Oct 01 15:44:32 crc kubenswrapper[4949]: I1001 15:44:32.313634 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:32 crc kubenswrapper[4949]: I1001 15:44:32.320058 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:44:33 crc kubenswrapper[4949]: I1001 15:44:33.983205 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:44:43 crc kubenswrapper[4949]: I1001 15:44:43.052870 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-5hfpc" Oct 01 15:44:46 crc kubenswrapper[4949]: E1001 15:44:46.387165 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 01 15:44:46 crc kubenswrapper[4949]: E1001 15:44:46.387807 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66p4p,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-84mb5_openshift-marketplace(a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 01 15:44:46 crc kubenswrapper[4949]: E1001 15:44:46.389062 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-84mb5" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" Oct 01 15:44:47 crc 
kubenswrapper[4949]: E1001 15:44:47.336769 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-84mb5" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" Oct 01 15:44:47 crc kubenswrapper[4949]: E1001 15:44:47.394212 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 01 15:44:47 crc kubenswrapper[4949]: E1001 15:44:47.394358 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gc8nl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-zq8nf_openshift-marketplace(46c5822e-cd0c-4c66-828c-a0f9a50879de): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 01 15:44:47 crc kubenswrapper[4949]: E1001 15:44:47.395514 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-zq8nf" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" Oct 01 15:44:48 crc 
kubenswrapper[4949]: E1001 15:44:48.008901 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 01 15:44:48 crc kubenswrapper[4949]: E1001 15:44:48.009162 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7gm8k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-cllpk_openshift-marketplace(2fce939d-7a50-418c-876e-05cc8619a809): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 01 15:44:48 crc kubenswrapper[4949]: E1001 15:44:48.010492 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-cllpk" podUID="2fce939d-7a50-418c-876e-05cc8619a809" Oct 01 15:44:48 crc kubenswrapper[4949]: I1001 15:44:48.038781 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:44:48 crc kubenswrapper[4949]: I1001 15:44:48.038855 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:44:48 crc kubenswrapper[4949]: I1001 15:44:48.747285 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.503266 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-cllpk" podUID="2fce939d-7a50-418c-876e-05cc8619a809" Oct 01 15:44:50 crc 
kubenswrapper[4949]: E1001 15:44:50.581446 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.581721 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cbl65,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
community-operators-6s25h_openshift-marketplace(afd2fd7e-b101-41fa-bc35-87cdf06d791a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.582908 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-6s25h" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.585164 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.585310 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ssfnc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-jvj9j_openshift-marketplace(a935699b-031c-41c8-ae81-e631cfb6d465): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.587297 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-jvj9j" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" Oct 01 15:44:50 crc 
kubenswrapper[4949]: E1001 15:44:50.609229 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.609371 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zdqdr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-2vm94_openshift-marketplace(505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.611116 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-2vm94" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.655252 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.655421 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6t5ql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-h9w85_openshift-marketplace(48941e1e-2481-47ca-832d-047a6d3220c8): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.658364 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-h9w85" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" Oct 01 15:44:50 crc 
kubenswrapper[4949]: E1001 15:44:50.660382 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.660476 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v2pk7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-operators-ltmmw_openshift-marketplace(470a3d16-cc7d-4824-a36e-a6004d9b530f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Oct 01 15:44:50 crc kubenswrapper[4949]: E1001 15:44:50.662347 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-ltmmw" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" Oct 01 15:44:50 crc kubenswrapper[4949]: I1001 15:44:50.893766 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-kfx8b"] Oct 01 15:44:50 crc kubenswrapper[4949]: W1001 15:44:50.900147 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd7ae63df_6ab6_40e9_bbaf_9d1f49f752bd.slice/crio-085d502a8c6a392e21d1f951dce83eb02c14e4843cbb3943a1c1d8e7832bae71 WatchSource:0}: Error finding container 085d502a8c6a392e21d1f951dce83eb02c14e4843cbb3943a1c1d8e7832bae71: Status 404 returned error can't find the container with id 085d502a8c6a392e21d1f951dce83eb02c14e4843cbb3943a1c1d8e7832bae71 Oct 01 15:44:51 crc kubenswrapper[4949]: I1001 15:44:51.000254 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" event={"ID":"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd","Type":"ContainerStarted","Data":"085d502a8c6a392e21d1f951dce83eb02c14e4843cbb3943a1c1d8e7832bae71"} Oct 01 15:44:51 crc kubenswrapper[4949]: E1001 15:44:51.002323 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-h9w85" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" Oct 01 15:44:51 crc kubenswrapper[4949]: E1001 15:44:51.003317 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-ltmmw" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" Oct 01 15:44:51 crc kubenswrapper[4949]: E1001 15:44:51.003946 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-6s25h" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" Oct 01 15:44:51 crc kubenswrapper[4949]: E1001 15:44:51.004384 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-2vm94" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" Oct 01 15:44:51 crc kubenswrapper[4949]: E1001 15:44:51.005542 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-jvj9j" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" Oct 01 15:44:52 crc kubenswrapper[4949]: I1001 15:44:52.007944 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" event={"ID":"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd","Type":"ContainerStarted","Data":"e402b772d1c09c231bcb0c8ac39b23621881979c6af84615d03211d34bed07b2"} Oct 01 15:44:52 
crc kubenswrapper[4949]: I1001 15:44:52.008504 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-kfx8b" event={"ID":"d7ae63df-6ab6-40e9-bbaf-9d1f49f752bd","Type":"ContainerStarted","Data":"2ba161e637d7e3527059462b73ac0ab1b7de550991ba41ae4553a0278ed89f49"} Oct 01 15:44:52 crc kubenswrapper[4949]: I1001 15:44:52.023265 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-kfx8b" podStartSLOduration=168.023248775 podStartE2EDuration="2m48.023248775s" podCreationTimestamp="2025-10-01 15:42:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:44:52.022486583 +0000 UTC m=+191.328092774" watchObservedRunningTime="2025-10-01 15:44:52.023248775 +0000 UTC m=+191.328854966" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.143485 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d"] Oct 01 15:45:00 crc kubenswrapper[4949]: E1001 15:45:00.144119 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f942e26-b3f7-4cea-9d31-b4e882f1250a" containerName="pruner" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.144152 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f942e26-b3f7-4cea-9d31-b4e882f1250a" containerName="pruner" Oct 01 15:45:00 crc kubenswrapper[4949]: E1001 15:45:00.144170 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98af3efd-3e5b-4bfd-96ae-f3629aa18f43" containerName="collect-profiles" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.144178 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="98af3efd-3e5b-4bfd-96ae-f3629aa18f43" containerName="collect-profiles" Oct 01 15:45:00 crc kubenswrapper[4949]: E1001 15:45:00.144198 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2e8b8282-5372-428c-b11c-94f059bc9a0e" containerName="pruner" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.144208 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e8b8282-5372-428c-b11c-94f059bc9a0e" containerName="pruner" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.144319 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f942e26-b3f7-4cea-9d31-b4e882f1250a" containerName="pruner" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.144339 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="98af3efd-3e5b-4bfd-96ae-f3629aa18f43" containerName="collect-profiles" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.144351 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e8b8282-5372-428c-b11c-94f059bc9a0e" containerName="pruner" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.144814 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.151695 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.151871 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.158657 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d"] Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.253982 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w4mbs\" (UniqueName: \"kubernetes.io/projected/ca85de63-357f-46ae-8559-88caf4cf27f3-kube-api-access-w4mbs\") pod \"collect-profiles-29322225-2kx5d\" (UID: 
\"ca85de63-357f-46ae-8559-88caf4cf27f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.254080 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca85de63-357f-46ae-8559-88caf4cf27f3-secret-volume\") pod \"collect-profiles-29322225-2kx5d\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.254357 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca85de63-357f-46ae-8559-88caf4cf27f3-config-volume\") pod \"collect-profiles-29322225-2kx5d\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.355719 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca85de63-357f-46ae-8559-88caf4cf27f3-config-volume\") pod \"collect-profiles-29322225-2kx5d\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.355809 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w4mbs\" (UniqueName: \"kubernetes.io/projected/ca85de63-357f-46ae-8559-88caf4cf27f3-kube-api-access-w4mbs\") pod \"collect-profiles-29322225-2kx5d\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.355834 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca85de63-357f-46ae-8559-88caf4cf27f3-secret-volume\") pod \"collect-profiles-29322225-2kx5d\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.356727 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca85de63-357f-46ae-8559-88caf4cf27f3-config-volume\") pod \"collect-profiles-29322225-2kx5d\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.361791 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca85de63-357f-46ae-8559-88caf4cf27f3-secret-volume\") pod \"collect-profiles-29322225-2kx5d\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.376770 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w4mbs\" (UniqueName: \"kubernetes.io/projected/ca85de63-357f-46ae-8559-88caf4cf27f3-kube-api-access-w4mbs\") pod \"collect-profiles-29322225-2kx5d\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:00 crc kubenswrapper[4949]: I1001 15:45:00.478609 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:06 crc kubenswrapper[4949]: I1001 15:45:06.252704 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d"] Oct 01 15:45:07 crc kubenswrapper[4949]: I1001 15:45:07.094153 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84mb5" event={"ID":"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c","Type":"ContainerStarted","Data":"fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685"} Oct 01 15:45:07 crc kubenswrapper[4949]: I1001 15:45:07.096919 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" event={"ID":"ca85de63-357f-46ae-8559-88caf4cf27f3","Type":"ContainerStarted","Data":"b9d979121aacc0de55c1025b22100430f3a5abb58daafe1d06d01c94c0852cac"} Oct 01 15:45:07 crc kubenswrapper[4949]: I1001 15:45:07.096977 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" event={"ID":"ca85de63-357f-46ae-8559-88caf4cf27f3","Type":"ContainerStarted","Data":"cc441f13911bd33eb0388594c53804c7fa723d24ff3692d975ab6e2acdd9568b"} Oct 01 15:45:08 crc kubenswrapper[4949]: I1001 15:45:08.106340 4949 generic.go:334] "Generic (PLEG): container finished" podID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerID="94654bafe6e17ba09c266df79c94ed2f7c08afb95ef52ea434cd5aa92f51e335" exitCode=0 Oct 01 15:45:08 crc kubenswrapper[4949]: I1001 15:45:08.106403 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq8nf" event={"ID":"46c5822e-cd0c-4c66-828c-a0f9a50879de","Type":"ContainerDied","Data":"94654bafe6e17ba09c266df79c94ed2f7c08afb95ef52ea434cd5aa92f51e335"} Oct 01 15:45:08 crc kubenswrapper[4949]: I1001 15:45:08.108339 4949 generic.go:334] "Generic (PLEG): container 
finished" podID="ca85de63-357f-46ae-8559-88caf4cf27f3" containerID="b9d979121aacc0de55c1025b22100430f3a5abb58daafe1d06d01c94c0852cac" exitCode=0 Oct 01 15:45:08 crc kubenswrapper[4949]: I1001 15:45:08.108409 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" event={"ID":"ca85de63-357f-46ae-8559-88caf4cf27f3","Type":"ContainerDied","Data":"b9d979121aacc0de55c1025b22100430f3a5abb58daafe1d06d01c94c0852cac"} Oct 01 15:45:08 crc kubenswrapper[4949]: I1001 15:45:08.111164 4949 generic.go:334] "Generic (PLEG): container finished" podID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerID="fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685" exitCode=0 Oct 01 15:45:08 crc kubenswrapper[4949]: I1001 15:45:08.111223 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84mb5" event={"ID":"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c","Type":"ContainerDied","Data":"fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685"} Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.422949 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.485414 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca85de63-357f-46ae-8559-88caf4cf27f3-secret-volume\") pod \"ca85de63-357f-46ae-8559-88caf4cf27f3\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.485490 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca85de63-357f-46ae-8559-88caf4cf27f3-config-volume\") pod \"ca85de63-357f-46ae-8559-88caf4cf27f3\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.485529 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4mbs\" (UniqueName: \"kubernetes.io/projected/ca85de63-357f-46ae-8559-88caf4cf27f3-kube-api-access-w4mbs\") pod \"ca85de63-357f-46ae-8559-88caf4cf27f3\" (UID: \"ca85de63-357f-46ae-8559-88caf4cf27f3\") " Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.486370 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ca85de63-357f-46ae-8559-88caf4cf27f3-config-volume" (OuterVolumeSpecName: "config-volume") pod "ca85de63-357f-46ae-8559-88caf4cf27f3" (UID: "ca85de63-357f-46ae-8559-88caf4cf27f3"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.490962 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ca85de63-357f-46ae-8559-88caf4cf27f3-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ca85de63-357f-46ae-8559-88caf4cf27f3" (UID: "ca85de63-357f-46ae-8559-88caf4cf27f3"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.491146 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca85de63-357f-46ae-8559-88caf4cf27f3-kube-api-access-w4mbs" (OuterVolumeSpecName: "kube-api-access-w4mbs") pod "ca85de63-357f-46ae-8559-88caf4cf27f3" (UID: "ca85de63-357f-46ae-8559-88caf4cf27f3"). InnerVolumeSpecName "kube-api-access-w4mbs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.587694 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ca85de63-357f-46ae-8559-88caf4cf27f3-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.587723 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca85de63-357f-46ae-8559-88caf4cf27f3-config-volume\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:09 crc kubenswrapper[4949]: I1001 15:45:09.587732 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4mbs\" (UniqueName: \"kubernetes.io/projected/ca85de63-357f-46ae-8559-88caf4cf27f3-kube-api-access-w4mbs\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.125380 4949 generic.go:334] "Generic (PLEG): container finished" podID="48941e1e-2481-47ca-832d-047a6d3220c8" containerID="ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea" exitCode=0 Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.125581 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9w85" event={"ID":"48941e1e-2481-47ca-832d-047a6d3220c8","Type":"ContainerDied","Data":"ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea"} Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.127691 4949 generic.go:334] "Generic 
(PLEG): container finished" podID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerID="8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d" exitCode=0 Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.127760 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vm94" event={"ID":"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9","Type":"ContainerDied","Data":"8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d"} Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.133151 4949 generic.go:334] "Generic (PLEG): container finished" podID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerID="76dfbbf9404c1093d263e5dbc6b6298476b728c13f3a5dd04e97f4465d2d1aca" exitCode=0 Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.133197 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s25h" event={"ID":"afd2fd7e-b101-41fa-bc35-87cdf06d791a","Type":"ContainerDied","Data":"76dfbbf9404c1093d263e5dbc6b6298476b728c13f3a5dd04e97f4465d2d1aca"} Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.149548 4949 generic.go:334] "Generic (PLEG): container finished" podID="a935699b-031c-41c8-ae81-e631cfb6d465" containerID="790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb" exitCode=0 Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.149630 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvj9j" event={"ID":"a935699b-031c-41c8-ae81-e631cfb6d465","Type":"ContainerDied","Data":"790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb"} Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.155099 4949 generic.go:334] "Generic (PLEG): container finished" podID="2fce939d-7a50-418c-876e-05cc8619a809" containerID="f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0" exitCode=0 Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.155188 4949 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-marketplace-cllpk" event={"ID":"2fce939d-7a50-418c-876e-05cc8619a809","Type":"ContainerDied","Data":"f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0"} Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.158564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84mb5" event={"ID":"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c","Type":"ContainerStarted","Data":"e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365"} Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.162792 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq8nf" event={"ID":"46c5822e-cd0c-4c66-828c-a0f9a50879de","Type":"ContainerStarted","Data":"8eb22fa102e7146cdac577d89d90ce1d18bc0d28daa180a5496053049aa33e35"} Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.166506 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" event={"ID":"ca85de63-357f-46ae-8559-88caf4cf27f3","Type":"ContainerDied","Data":"cc441f13911bd33eb0388594c53804c7fa723d24ff3692d975ab6e2acdd9568b"} Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.166553 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cc441f13911bd33eb0388594c53804c7fa723d24ff3692d975ab6e2acdd9568b" Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.166637 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d" Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.169511 4949 generic.go:334] "Generic (PLEG): container finished" podID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerID="c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2" exitCode=0 Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.169573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltmmw" event={"ID":"470a3d16-cc7d-4824-a36e-a6004d9b530f","Type":"ContainerDied","Data":"c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2"} Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.265403 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zq8nf" podStartSLOduration=4.18661811 podStartE2EDuration="1m5.265372343s" podCreationTimestamp="2025-10-01 15:44:05 +0000 UTC" firstStartedPulling="2025-10-01 15:44:08.238032722 +0000 UTC m=+147.543638913" lastFinishedPulling="2025-10-01 15:45:09.316786955 +0000 UTC m=+208.622393146" observedRunningTime="2025-10-01 15:45:10.263115517 +0000 UTC m=+209.568721708" watchObservedRunningTime="2025-10-01 15:45:10.265372343 +0000 UTC m=+209.570978534" Oct 01 15:45:10 crc kubenswrapper[4949]: I1001 15:45:10.280025 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-84mb5" podStartSLOduration=3.018327382 podStartE2EDuration="1m4.280000696s" podCreationTimestamp="2025-10-01 15:44:06 +0000 UTC" firstStartedPulling="2025-10-01 15:44:08.216036081 +0000 UTC m=+147.521642272" lastFinishedPulling="2025-10-01 15:45:09.477709395 +0000 UTC m=+208.783315586" observedRunningTime="2025-10-01 15:45:10.278662538 +0000 UTC m=+209.584268749" watchObservedRunningTime="2025-10-01 15:45:10.280000696 +0000 UTC m=+209.585606907" Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 
15:45:11.177696 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltmmw" event={"ID":"470a3d16-cc7d-4824-a36e-a6004d9b530f","Type":"ContainerStarted","Data":"8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489"} Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.180116 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s25h" event={"ID":"afd2fd7e-b101-41fa-bc35-87cdf06d791a","Type":"ContainerStarted","Data":"1346fe2835bf6d37b720c4da8785a4ab92143d5ddf234d63003dba746e29ffdc"} Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.182103 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vm94" event={"ID":"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9","Type":"ContainerStarted","Data":"cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e"} Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.184100 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvj9j" event={"ID":"a935699b-031c-41c8-ae81-e631cfb6d465","Type":"ContainerStarted","Data":"3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a"} Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.186349 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cllpk" event={"ID":"2fce939d-7a50-418c-876e-05cc8619a809","Type":"ContainerStarted","Data":"9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893"} Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.189565 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9w85" event={"ID":"48941e1e-2481-47ca-832d-047a6d3220c8","Type":"ContainerStarted","Data":"868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895"} Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.196762 4949 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ltmmw" podStartSLOduration=2.812598852 podStartE2EDuration="1m3.196744121s" podCreationTimestamp="2025-10-01 15:44:08 +0000 UTC" firstStartedPulling="2025-10-01 15:44:10.40019441 +0000 UTC m=+149.705800611" lastFinishedPulling="2025-10-01 15:45:10.784339689 +0000 UTC m=+210.089945880" observedRunningTime="2025-10-01 15:45:11.195563657 +0000 UTC m=+210.501169858" watchObservedRunningTime="2025-10-01 15:45:11.196744121 +0000 UTC m=+210.502350312" Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.217130 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6s25h" podStartSLOduration=3.676743204 podStartE2EDuration="1m6.217104071s" podCreationTimestamp="2025-10-01 15:44:05 +0000 UTC" firstStartedPulling="2025-10-01 15:44:08.205341974 +0000 UTC m=+147.510948165" lastFinishedPulling="2025-10-01 15:45:10.745702841 +0000 UTC m=+210.051309032" observedRunningTime="2025-10-01 15:45:11.214951968 +0000 UTC m=+210.520558159" watchObservedRunningTime="2025-10-01 15:45:11.217104071 +0000 UTC m=+210.522710262" Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.253821 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-cllpk" podStartSLOduration=4.16522138 podStartE2EDuration="1m4.253805864s" podCreationTimestamp="2025-10-01 15:44:07 +0000 UTC" firstStartedPulling="2025-10-01 15:44:10.442157933 +0000 UTC m=+149.747764124" lastFinishedPulling="2025-10-01 15:45:10.530742417 +0000 UTC m=+209.836348608" observedRunningTime="2025-10-01 15:45:11.251744204 +0000 UTC m=+210.557350415" watchObservedRunningTime="2025-10-01 15:45:11.253805864 +0000 UTC m=+210.559412055" Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.254426 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-h9w85" 
podStartSLOduration=3.850344481 podStartE2EDuration="1m6.254423071s" podCreationTimestamp="2025-10-01 15:44:05 +0000 UTC" firstStartedPulling="2025-10-01 15:44:08.22038108 +0000 UTC m=+147.525987271" lastFinishedPulling="2025-10-01 15:45:10.62445967 +0000 UTC m=+209.930065861" observedRunningTime="2025-10-01 15:45:11.233704681 +0000 UTC m=+210.539310872" watchObservedRunningTime="2025-10-01 15:45:11.254423071 +0000 UTC m=+210.560029262" Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.273288 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-2vm94" podStartSLOduration=2.912654733 podStartE2EDuration="1m4.273267437s" podCreationTimestamp="2025-10-01 15:44:07 +0000 UTC" firstStartedPulling="2025-10-01 15:44:09.340953368 +0000 UTC m=+148.646559559" lastFinishedPulling="2025-10-01 15:45:10.701566072 +0000 UTC m=+210.007172263" observedRunningTime="2025-10-01 15:45:11.269988091 +0000 UTC m=+210.575594282" watchObservedRunningTime="2025-10-01 15:45:11.273267437 +0000 UTC m=+210.578873628" Oct 01 15:45:11 crc kubenswrapper[4949]: I1001 15:45:11.292646 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jvj9j" podStartSLOduration=3.17363467 podStartE2EDuration="1m2.292623037s" podCreationTimestamp="2025-10-01 15:44:09 +0000 UTC" firstStartedPulling="2025-10-01 15:44:11.533299209 +0000 UTC m=+150.838905400" lastFinishedPulling="2025-10-01 15:45:10.652287576 +0000 UTC m=+209.957893767" observedRunningTime="2025-10-01 15:45:11.286963773 +0000 UTC m=+210.592569974" watchObservedRunningTime="2025-10-01 15:45:11.292623037 +0000 UTC m=+210.598229228" Oct 01 15:45:15 crc kubenswrapper[4949]: I1001 15:45:15.804495 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:45:15 crc kubenswrapper[4949]: I1001 15:45:15.804996 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="" pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.104919 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.104989 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.203178 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.207557 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.277289 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.287254 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.324820 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.324874 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.354979 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wjx4c"] Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.397829 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:45:16 crc 
kubenswrapper[4949]: I1001 15:45:16.502837 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.502905 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:45:16 crc kubenswrapper[4949]: I1001 15:45:16.546966 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:45:17 crc kubenswrapper[4949]: I1001 15:45:17.231824 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9w85"] Oct 01 15:45:17 crc kubenswrapper[4949]: I1001 15:45:17.261346 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:45:17 crc kubenswrapper[4949]: I1001 15:45:17.266904 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:45:17 crc kubenswrapper[4949]: I1001 15:45:17.907583 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:45:17 crc kubenswrapper[4949]: I1001 15:45:17.907627 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:45:17 crc kubenswrapper[4949]: I1001 15:45:17.945902 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.038921 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" start-of-body= Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.038984 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.039029 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.039583 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.039681 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633" gracePeriod=600 Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.225467 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633" exitCode=0 Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.225558 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" 
event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633"} Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.226020 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-h9w85" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" containerName="registry-server" containerID="cri-o://868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895" gracePeriod=2 Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.277344 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.302951 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.303018 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.343821 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.636255 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84mb5"] Oct 01 15:45:18 crc kubenswrapper[4949]: I1001 15:45:18.889597 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.006041 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-utilities\") pod \"48941e1e-2481-47ca-832d-047a6d3220c8\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.006192 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6t5ql\" (UniqueName: \"kubernetes.io/projected/48941e1e-2481-47ca-832d-047a6d3220c8-kube-api-access-6t5ql\") pod \"48941e1e-2481-47ca-832d-047a6d3220c8\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.006247 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-catalog-content\") pod \"48941e1e-2481-47ca-832d-047a6d3220c8\" (UID: \"48941e1e-2481-47ca-832d-047a6d3220c8\") " Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.006980 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-utilities" (OuterVolumeSpecName: "utilities") pod "48941e1e-2481-47ca-832d-047a6d3220c8" (UID: "48941e1e-2481-47ca-832d-047a6d3220c8"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.013473 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/48941e1e-2481-47ca-832d-047a6d3220c8-kube-api-access-6t5ql" (OuterVolumeSpecName: "kube-api-access-6t5ql") pod "48941e1e-2481-47ca-832d-047a6d3220c8" (UID: "48941e1e-2481-47ca-832d-047a6d3220c8"). InnerVolumeSpecName "kube-api-access-6t5ql". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.055774 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "48941e1e-2481-47ca-832d-047a6d3220c8" (UID: "48941e1e-2481-47ca-832d-047a6d3220c8"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.107283 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6t5ql\" (UniqueName: \"kubernetes.io/projected/48941e1e-2481-47ca-832d-047a6d3220c8-kube-api-access-6t5ql\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.107318 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.107330 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/48941e1e-2481-47ca-832d-047a6d3220c8-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.231404 4949 generic.go:334] "Generic (PLEG): container finished" podID="48941e1e-2481-47ca-832d-047a6d3220c8" containerID="868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895" exitCode=0 Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.231456 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-h9w85" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.231472 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9w85" event={"ID":"48941e1e-2481-47ca-832d-047a6d3220c8","Type":"ContainerDied","Data":"868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895"} Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.231504 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-h9w85" event={"ID":"48941e1e-2481-47ca-832d-047a6d3220c8","Type":"ContainerDied","Data":"d22e45720f9122e21622e804b56e93b62a66aa097ea6bb08759e84ca6a88f201"} Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.231519 4949 scope.go:117] "RemoveContainer" containerID="868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.233946 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"66ef7ab6a5e7fcf3b79f687490573d82ab228e320149f8042b009a1c999806ee"} Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.234057 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-84mb5" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerName="registry-server" containerID="cri-o://e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365" gracePeriod=2 Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.251832 4949 scope.go:117] "RemoveContainer" containerID="ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.278532 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-h9w85"] Oct 01 15:45:19 crc 
kubenswrapper[4949]: I1001 15:45:19.282103 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-h9w85"] Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.284747 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.292085 4949 scope.go:117] "RemoveContainer" containerID="cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.380079 4949 scope.go:117] "RemoveContainer" containerID="868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895" Oct 01 15:45:19 crc kubenswrapper[4949]: E1001 15:45:19.382906 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895\": container with ID starting with 868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895 not found: ID does not exist" containerID="868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.382966 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895"} err="failed to get container status \"868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895\": rpc error: code = NotFound desc = could not find container \"868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895\": container with ID starting with 868d655affdbf712314d3e063d4392374e779886b8deb04b6e211d65ab65d895 not found: ID does not exist" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.382999 4949 scope.go:117] "RemoveContainer" containerID="ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea" Oct 01 15:45:19 crc kubenswrapper[4949]: E1001 
15:45:19.384420 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea\": container with ID starting with ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea not found: ID does not exist" containerID="ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.384462 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea"} err="failed to get container status \"ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea\": rpc error: code = NotFound desc = could not find container \"ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea\": container with ID starting with ec9cf555692688ab9a723d4c843ea0c526c4c7db76e4800e1117c07dcdd347ea not found: ID does not exist" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.384488 4949 scope.go:117] "RemoveContainer" containerID="cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d" Oct 01 15:45:19 crc kubenswrapper[4949]: E1001 15:45:19.384741 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d\": container with ID starting with cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d not found: ID does not exist" containerID="cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.384764 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d"} err="failed to get container status \"cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d\": rpc 
error: code = NotFound desc = could not find container \"cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d\": container with ID starting with cb688aecf38a364dabe74046126fad8c15ff3914925dd3f93d22fb322595e15d not found: ID does not exist" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.463567 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.463627 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.500486 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.616888 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" path="/var/lib/kubelet/pods/48941e1e-2481-47ca-832d-047a6d3220c8/volumes" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.708353 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.815520 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-utilities\") pod \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.815560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-catalog-content\") pod \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.815583 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66p4p\" (UniqueName: \"kubernetes.io/projected/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-kube-api-access-66p4p\") pod \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\" (UID: \"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c\") " Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.816368 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-utilities" (OuterVolumeSpecName: "utilities") pod "a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" (UID: "a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.819353 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-kube-api-access-66p4p" (OuterVolumeSpecName: "kube-api-access-66p4p") pod "a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" (UID: "a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c"). InnerVolumeSpecName "kube-api-access-66p4p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.853473 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.853836 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.882907 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" (UID: "a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.897105 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.916676 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.916705 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:19 crc kubenswrapper[4949]: I1001 15:45:19.916734 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66p4p\" (UniqueName: \"kubernetes.io/projected/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c-kube-api-access-66p4p\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.241874 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerID="e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365" exitCode=0 Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.242735 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-84mb5" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.252451 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84mb5" event={"ID":"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c","Type":"ContainerDied","Data":"e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365"} Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.252649 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-84mb5" event={"ID":"a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c","Type":"ContainerDied","Data":"d3840491823d0bf6c6bcf44456c0377a6286f3ca064c6410b7d994d452b3e6f6"} Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.252727 4949 scope.go:117] "RemoveContainer" containerID="e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.273651 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-84mb5"] Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.278048 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-84mb5"] Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.285695 4949 scope.go:117] "RemoveContainer" containerID="fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.314864 4949 scope.go:117] "RemoveContainer" containerID="454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.315937 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.328106 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.332733 4949 scope.go:117] "RemoveContainer" containerID="e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365" Oct 01 15:45:20 crc kubenswrapper[4949]: E1001 15:45:20.333170 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365\": container with ID starting with e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365 not found: ID does not exist" containerID="e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.333276 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365"} err="failed to get container status \"e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365\": rpc error: code = NotFound desc = could not find container \"e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365\": container with ID starting with e4f8aa2fd39e0015ff355a5c0ff4b5c9c019cf2b1e32fefdd345dba8a069c365 not found: ID does not exist" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.333361 4949 scope.go:117] "RemoveContainer" containerID="fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685" Oct 01 15:45:20 crc kubenswrapper[4949]: E1001 15:45:20.333763 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685\": container with ID starting with 
fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685 not found: ID does not exist" containerID="fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.333856 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685"} err="failed to get container status \"fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685\": rpc error: code = NotFound desc = could not find container \"fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685\": container with ID starting with fbd568c6dfd82b7e74a2794889c83cd9cbe608954c5c83d56e6f2b4911052685 not found: ID does not exist" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.333949 4949 scope.go:117] "RemoveContainer" containerID="454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd" Oct 01 15:45:20 crc kubenswrapper[4949]: E1001 15:45:20.337462 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd\": container with ID starting with 454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd not found: ID does not exist" containerID="454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd" Oct 01 15:45:20 crc kubenswrapper[4949]: I1001 15:45:20.337516 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd"} err="failed to get container status \"454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd\": rpc error: code = NotFound desc = could not find container \"454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd\": container with ID starting with 454f0a7ffe07950d33c793c2537bdb84dada942c7bc74dd5d60d009f8634b1bd not found: ID does not 
exist" Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.032551 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cllpk"] Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.250649 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-cllpk" podUID="2fce939d-7a50-418c-876e-05cc8619a809" containerName="registry-server" containerID="cri-o://9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893" gracePeriod=2 Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.608858 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" path="/var/lib/kubelet/pods/a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c/volumes" Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.641405 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.741184 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7gm8k\" (UniqueName: \"kubernetes.io/projected/2fce939d-7a50-418c-876e-05cc8619a809-kube-api-access-7gm8k\") pod \"2fce939d-7a50-418c-876e-05cc8619a809\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.741276 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-catalog-content\") pod \"2fce939d-7a50-418c-876e-05cc8619a809\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.741314 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-utilities\") pod 
\"2fce939d-7a50-418c-876e-05cc8619a809\" (UID: \"2fce939d-7a50-418c-876e-05cc8619a809\") " Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.744813 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-utilities" (OuterVolumeSpecName: "utilities") pod "2fce939d-7a50-418c-876e-05cc8619a809" (UID: "2fce939d-7a50-418c-876e-05cc8619a809"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.746673 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2fce939d-7a50-418c-876e-05cc8619a809-kube-api-access-7gm8k" (OuterVolumeSpecName: "kube-api-access-7gm8k") pod "2fce939d-7a50-418c-876e-05cc8619a809" (UID: "2fce939d-7a50-418c-876e-05cc8619a809"). InnerVolumeSpecName "kube-api-access-7gm8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.760972 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2fce939d-7a50-418c-876e-05cc8619a809" (UID: "2fce939d-7a50-418c-876e-05cc8619a809"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.842847 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.842887 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7gm8k\" (UniqueName: \"kubernetes.io/projected/2fce939d-7a50-418c-876e-05cc8619a809-kube-api-access-7gm8k\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:21 crc kubenswrapper[4949]: I1001 15:45:21.842900 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2fce939d-7a50-418c-876e-05cc8619a809-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.256657 4949 generic.go:334] "Generic (PLEG): container finished" podID="2fce939d-7a50-418c-876e-05cc8619a809" containerID="9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893" exitCode=0 Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.256706 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cllpk" event={"ID":"2fce939d-7a50-418c-876e-05cc8619a809","Type":"ContainerDied","Data":"9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893"} Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.256738 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-cllpk" event={"ID":"2fce939d-7a50-418c-876e-05cc8619a809","Type":"ContainerDied","Data":"4a913027f74f0c671308cae5906c5a8d36c9c7aff0ef84329292e7857098d1bc"} Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.256759 4949 scope.go:117] "RemoveContainer" containerID="9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893" Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 
15:45:22.256892 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-cllpk" Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.290345 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-cllpk"] Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.291743 4949 scope.go:117] "RemoveContainer" containerID="f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0" Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.293214 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-cllpk"] Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.311274 4949 scope.go:117] "RemoveContainer" containerID="111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6" Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.329220 4949 scope.go:117] "RemoveContainer" containerID="9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893" Oct 01 15:45:22 crc kubenswrapper[4949]: E1001 15:45:22.331304 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893\": container with ID starting with 9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893 not found: ID does not exist" containerID="9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893" Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.331353 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893"} err="failed to get container status \"9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893\": rpc error: code = NotFound desc = could not find container \"9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893\": container with ID starting with 
9f3203b7af2adf19a72ab6e27cb78484cd635248bd7a96f827115fc0667ad893 not found: ID does not exist" Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.331388 4949 scope.go:117] "RemoveContainer" containerID="f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0" Oct 01 15:45:22 crc kubenswrapper[4949]: E1001 15:45:22.331922 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0\": container with ID starting with f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0 not found: ID does not exist" containerID="f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0" Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.331951 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0"} err="failed to get container status \"f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0\": rpc error: code = NotFound desc = could not find container \"f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0\": container with ID starting with f069c3286180b74905742927ce58572eb089462eeeaad356ec996f61a33372e0 not found: ID does not exist" Oct 01 15:45:22 crc kubenswrapper[4949]: I1001 15:45:22.331969 4949 scope.go:117] "RemoveContainer" containerID="111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6" Oct 01 15:45:22 crc kubenswrapper[4949]: E1001 15:45:22.332428 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6\": container with ID starting with 111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6 not found: ID does not exist" containerID="111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6" Oct 01 15:45:22 crc 
kubenswrapper[4949]: I1001 15:45:22.332471 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6"} err="failed to get container status \"111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6\": rpc error: code = NotFound desc = could not find container \"111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6\": container with ID starting with 111fa9828ed33f70d91234ec54572d82481ea11932d35fcef2df5492b54d6bd6 not found: ID does not exist" Oct 01 15:45:23 crc kubenswrapper[4949]: I1001 15:45:23.608080 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2fce939d-7a50-418c-876e-05cc8619a809" path="/var/lib/kubelet/pods/2fce939d-7a50-418c-876e-05cc8619a809/volumes" Oct 01 15:45:23 crc kubenswrapper[4949]: I1001 15:45:23.631988 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvj9j"] Oct 01 15:45:23 crc kubenswrapper[4949]: I1001 15:45:23.632489 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jvj9j" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" containerName="registry-server" containerID="cri-o://3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a" gracePeriod=2 Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.221441 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.274905 4949 generic.go:334] "Generic (PLEG): container finished" podID="a935699b-031c-41c8-ae81-e631cfb6d465" containerID="3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a" exitCode=0 Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.274951 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvj9j" event={"ID":"a935699b-031c-41c8-ae81-e631cfb6d465","Type":"ContainerDied","Data":"3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a"} Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.274993 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jvj9j" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.275013 4949 scope.go:117] "RemoveContainer" containerID="3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.274998 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jvj9j" event={"ID":"a935699b-031c-41c8-ae81-e631cfb6d465","Type":"ContainerDied","Data":"d65c9443d26eaeca923df1986010a516fe1522a81f2b78ab276a6f1b2d5423fe"} Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.293015 4949 scope.go:117] "RemoveContainer" containerID="790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.310890 4949 scope.go:117] "RemoveContainer" containerID="ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.334630 4949 scope.go:117] "RemoveContainer" containerID="3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a" Oct 01 15:45:25 crc kubenswrapper[4949]: E1001 15:45:25.335337 4949 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a\": container with ID starting with 3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a not found: ID does not exist" containerID="3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.335376 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a"} err="failed to get container status \"3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a\": rpc error: code = NotFound desc = could not find container \"3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a\": container with ID starting with 3098626f5c14e10fe3f0717e9a9e52aba5c768bafa9804feaea7758d723cf45a not found: ID does not exist" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.335403 4949 scope.go:117] "RemoveContainer" containerID="790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb" Oct 01 15:45:25 crc kubenswrapper[4949]: E1001 15:45:25.335816 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb\": container with ID starting with 790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb not found: ID does not exist" containerID="790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.335869 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb"} err="failed to get container status \"790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb\": rpc error: code = NotFound desc = could not find container 
\"790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb\": container with ID starting with 790e00e89741c9ca4138f77fb99d79d553e7fa472236944751c79fe7eb5277bb not found: ID does not exist" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.335904 4949 scope.go:117] "RemoveContainer" containerID="ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac" Oct 01 15:45:25 crc kubenswrapper[4949]: E1001 15:45:25.336336 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac\": container with ID starting with ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac not found: ID does not exist" containerID="ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.336369 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac"} err="failed to get container status \"ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac\": rpc error: code = NotFound desc = could not find container \"ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac\": container with ID starting with ce16896e65d62fb65450d2c9c66a1055ad70726405b58032edbfa1072224f0ac not found: ID does not exist" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.388820 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-catalog-content\") pod \"a935699b-031c-41c8-ae81-e631cfb6d465\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.388921 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-utilities\") pod \"a935699b-031c-41c8-ae81-e631cfb6d465\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.389009 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssfnc\" (UniqueName: \"kubernetes.io/projected/a935699b-031c-41c8-ae81-e631cfb6d465-kube-api-access-ssfnc\") pod \"a935699b-031c-41c8-ae81-e631cfb6d465\" (UID: \"a935699b-031c-41c8-ae81-e631cfb6d465\") " Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.389766 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-utilities" (OuterVolumeSpecName: "utilities") pod "a935699b-031c-41c8-ae81-e631cfb6d465" (UID: "a935699b-031c-41c8-ae81-e631cfb6d465"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.393935 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a935699b-031c-41c8-ae81-e631cfb6d465-kube-api-access-ssfnc" (OuterVolumeSpecName: "kube-api-access-ssfnc") pod "a935699b-031c-41c8-ae81-e631cfb6d465" (UID: "a935699b-031c-41c8-ae81-e631cfb6d465"). InnerVolumeSpecName "kube-api-access-ssfnc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.476366 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a935699b-031c-41c8-ae81-e631cfb6d465" (UID: "a935699b-031c-41c8-ae81-e631cfb6d465"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.489941 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.489980 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a935699b-031c-41c8-ae81-e631cfb6d465-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.489994 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssfnc\" (UniqueName: \"kubernetes.io/projected/a935699b-031c-41c8-ae81-e631cfb6d465-kube-api-access-ssfnc\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.609826 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jvj9j"] Oct 01 15:45:25 crc kubenswrapper[4949]: I1001 15:45:25.609869 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jvj9j"] Oct 01 15:45:27 crc kubenswrapper[4949]: I1001 15:45:27.608177 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" path="/var/lib/kubelet/pods/a935699b-031c-41c8-ae81-e631cfb6d465/volumes" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.393079 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" podUID="2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" containerName="oauth-openshift" containerID="cri-o://7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d" gracePeriod=15 Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.572443 4949 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-wjx4c container/oauth-openshift 
namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" start-of-body= Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.573157 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" podUID="2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.14:6443/healthz\": dial tcp 10.217.0.14:6443: connect: connection refused" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.786334 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814693 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-idp-0-file-data\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814737 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-ocp-branding-template\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814756 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-service-ca\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: 
I1001 15:45:41.814802 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-dir\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814825 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-trusted-ca-bundle\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814843 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-policies\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jchgl\" (UniqueName: \"kubernetes.io/projected/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-kube-api-access-jchgl\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814880 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-session\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814915 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-error\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814946 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-router-certs\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.814995 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-provider-selection\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.815029 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-login\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.815043 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-serving-cert\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.815074 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-cliconfig\") pod \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\" (UID: \"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8\") " Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.815907 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.816427 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.816965 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "audit-policies". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.819702 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.819805 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.825831 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.826352 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.826581 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.827457 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.830113 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.830805 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). 
InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.836560 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.836909 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.837081 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-kube-api-access-jchgl" (OuterVolumeSpecName: "kube-api-access-jchgl") pod "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" (UID: "2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8"). InnerVolumeSpecName "kube-api-access-jchgl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.837686 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-86d85988f6-rcbp4"] Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.837899 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" containerName="extract-utilities" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.837915 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" containerName="extract-utilities" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.837925 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerName="extract-content" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.837931 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerName="extract-content" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.837944 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.837950 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.837958 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" containerName="extract-content" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.837963 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" containerName="extract-content" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.837971 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.837977 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.837984 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerName="extract-utilities" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.837989 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerName="extract-utilities" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.837998 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca85de63-357f-46ae-8559-88caf4cf27f3" containerName="collect-profiles" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838003 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca85de63-357f-46ae-8559-88caf4cf27f3" containerName="collect-profiles" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.838010 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" containerName="oauth-openshift" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838016 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" containerName="oauth-openshift" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.838022 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce939d-7a50-418c-876e-05cc8619a809" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838027 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce939d-7a50-418c-876e-05cc8619a809" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.838035 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2fce939d-7a50-418c-876e-05cc8619a809" containerName="extract-content" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838040 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce939d-7a50-418c-876e-05cc8619a809" containerName="extract-content" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.838047 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" containerName="extract-utilities" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838052 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="48941e1e-2481-47ca-832d-047a6d3220c8" containerName="extract-utilities" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.838061 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" containerName="extract-content" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838067 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" containerName="extract-content" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.838075 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838081 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: E1001 15:45:41.838088 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2fce939d-7a50-418c-876e-05cc8619a809" containerName="extract-utilities" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838094 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2fce939d-7a50-418c-876e-05cc8619a809" containerName="extract-utilities" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838197 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="48941e1e-2481-47ca-832d-047a6d3220c8" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838208 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7e7a912-5eb3-47a0-99c8-69c5bfc5a37c" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838215 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a935699b-031c-41c8-ae81-e631cfb6d465" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838224 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca85de63-357f-46ae-8559-88caf4cf27f3" containerName="collect-profiles" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838232 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" containerName="oauth-openshift" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838239 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2fce939d-7a50-418c-876e-05cc8619a809" containerName="registry-server" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.838612 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.845111 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86d85988f6-rcbp4"] Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917015 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917078 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917157 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-audit-policies\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917187 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-session\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: 
\"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917211 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917246 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917270 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-template-error\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917299 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" 
Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917324 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917347 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-template-login\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917371 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917404 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917483 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjtnj\" (UniqueName: \"kubernetes.io/projected/bffeafb5-249d-4573-bc86-9b94e32ad1ff-kube-api-access-vjtnj\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917535 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bffeafb5-249d-4573-bc86-9b94e32ad1ff-audit-dir\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917597 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jchgl\" (UniqueName: \"kubernetes.io/projected/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-kube-api-access-jchgl\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917608 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917619 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917629 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 
15:45:41.917641 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917652 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917683 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917702 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917719 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917733 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917746 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917760 4949 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-dir\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917774 4949 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:41 crc kubenswrapper[4949]: I1001 15:45:41.917786 4949 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8-audit-policies\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.018762 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.018849 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vjtnj\" (UniqueName: \"kubernetes.io/projected/bffeafb5-249d-4573-bc86-9b94e32ad1ff-kube-api-access-vjtnj\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.018904 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bffeafb5-249d-4573-bc86-9b94e32ad1ff-audit-dir\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.018972 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019013 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019086 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-audit-policies\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019170 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-session\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 
15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019224 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019298 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019330 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-template-error\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019372 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019404 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" 
(UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019438 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-template-login\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.019476 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.020019 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.020279 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-audit-policies\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" 
Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.020520 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bffeafb5-249d-4573-bc86-9b94e32ad1ff-audit-dir\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.020584 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-service-ca\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.021046 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-cliconfig\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.022631 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.024210 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-ocp-branding-template\") pod 
\"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.025791 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-router-certs\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.025878 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-template-error\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.025892 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-session\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.027203 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-template-login\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.027816 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-system-serving-cert\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.029104 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/bffeafb5-249d-4573-bc86-9b94e32ad1ff-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.037643 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vjtnj\" (UniqueName: \"kubernetes.io/projected/bffeafb5-249d-4573-bc86-9b94e32ad1ff-kube-api-access-vjtnj\") pod \"oauth-openshift-86d85988f6-rcbp4\" (UID: \"bffeafb5-249d-4573-bc86-9b94e32ad1ff\") " pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.177729 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.378557 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.378604 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" event={"ID":"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8","Type":"ContainerDied","Data":"7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d"} Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.379328 4949 scope.go:117] "RemoveContainer" containerID="7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.378466 4949 generic.go:334] "Generic (PLEG): container finished" podID="2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" containerID="7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d" exitCode=0 Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.381093 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-wjx4c" event={"ID":"2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8","Type":"ContainerDied","Data":"b54bf423cc69ab5f5907b845da2760e77f3e912cba66ed4bdb0f8e226c60272b"} Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.385236 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-86d85988f6-rcbp4"] Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.411726 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wjx4c"] Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.412025 4949 scope.go:117] "RemoveContainer" containerID="7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d" Oct 01 15:45:42 crc kubenswrapper[4949]: E1001 15:45:42.412398 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d\": container 
with ID starting with 7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d not found: ID does not exist" containerID="7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.412441 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d"} err="failed to get container status \"7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d\": rpc error: code = NotFound desc = could not find container \"7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d\": container with ID starting with 7ccdcafee2a482df43cbb2ab9fd1b8882c5e4bdfa09d3f095b796ca58e89ef7d not found: ID does not exist" Oct 01 15:45:42 crc kubenswrapper[4949]: I1001 15:45:42.418620 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-wjx4c"] Oct 01 15:45:43 crc kubenswrapper[4949]: I1001 15:45:43.390177 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" event={"ID":"bffeafb5-249d-4573-bc86-9b94e32ad1ff","Type":"ContainerStarted","Data":"b6f651b8dd9a1a43527074f36748fd8a6e94409a6b78ccaa227e0f85e0e9e1f5"} Oct 01 15:45:43 crc kubenswrapper[4949]: I1001 15:45:43.390508 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" event={"ID":"bffeafb5-249d-4573-bc86-9b94e32ad1ff","Type":"ContainerStarted","Data":"f42208c478086429edffda16d029f182c8c66b202c303a50618dec81053d4402"} Oct 01 15:45:43 crc kubenswrapper[4949]: I1001 15:45:43.391074 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:43 crc kubenswrapper[4949]: I1001 15:45:43.398584 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" Oct 01 15:45:43 crc kubenswrapper[4949]: I1001 15:45:43.424314 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-86d85988f6-rcbp4" podStartSLOduration=27.424285751 podStartE2EDuration="27.424285751s" podCreationTimestamp="2025-10-01 15:45:16 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:45:43.422889701 +0000 UTC m=+242.728495922" watchObservedRunningTime="2025-10-01 15:45:43.424285751 +0000 UTC m=+242.729891982" Oct 01 15:45:43 crc kubenswrapper[4949]: I1001 15:45:43.607956 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8" path="/var/lib/kubelet/pods/2c825cac-dcf5-47c9-ab2e-6ff8b45ceda8/volumes" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.325427 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zq8nf"] Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.326243 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zq8nf" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerName="registry-server" containerID="cri-o://8eb22fa102e7146cdac577d89d90ce1d18bc0d28daa180a5496053049aa33e35" gracePeriod=30 Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.338383 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6s25h"] Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.338688 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6s25h" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerName="registry-server" containerID="cri-o://1346fe2835bf6d37b720c4da8785a4ab92143d5ddf234d63003dba746e29ffdc" gracePeriod=30 Oct 01 
15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.355953 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbrh4"] Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.356196 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" podUID="5dfe4211-4bb6-47e4-9797-652393f66bc5" containerName="marketplace-operator" containerID="cri-o://ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f" gracePeriod=30 Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.361530 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vm94"] Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.361790 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-2vm94" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerName="registry-server" containerID="cri-o://cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e" gracePeriod=30 Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.372183 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltmmw"] Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.372469 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ltmmw" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerName="registry-server" containerID="cri-o://8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489" gracePeriod=30 Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.384458 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-smlrq"] Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.385371 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.392293 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-smlrq"] Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.455771 4949 generic.go:334] "Generic (PLEG): container finished" podID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerID="1346fe2835bf6d37b720c4da8785a4ab92143d5ddf234d63003dba746e29ffdc" exitCode=0 Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.455868 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s25h" event={"ID":"afd2fd7e-b101-41fa-bc35-87cdf06d791a","Type":"ContainerDied","Data":"1346fe2835bf6d37b720c4da8785a4ab92143d5ddf234d63003dba746e29ffdc"} Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.457616 4949 generic.go:334] "Generic (PLEG): container finished" podID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerID="8eb22fa102e7146cdac577d89d90ce1d18bc0d28daa180a5496053049aa33e35" exitCode=0 Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.457639 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq8nf" event={"ID":"46c5822e-cd0c-4c66-828c-a0f9a50879de","Type":"ContainerDied","Data":"8eb22fa102e7146cdac577d89d90ce1d18bc0d28daa180a5496053049aa33e35"} Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.510259 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kwbr5\" (UniqueName: \"kubernetes.io/projected/46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b-kube-api-access-kwbr5\") pod \"marketplace-operator-79b997595-smlrq\" (UID: \"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b\") " pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.510708 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-smlrq\" (UID: \"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b\") " pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.510735 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-smlrq\" (UID: \"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b\") " pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.611485 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-smlrq\" (UID: \"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b\") " pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.611533 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-smlrq\" (UID: \"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b\") " pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.611576 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kwbr5\" (UniqueName: \"kubernetes.io/projected/46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b-kube-api-access-kwbr5\") pod 
\"marketplace-operator-79b997595-smlrq\" (UID: \"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b\") " pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.613036 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-smlrq\" (UID: \"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b\") " pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.623236 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-smlrq\" (UID: \"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b\") " pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.630777 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kwbr5\" (UniqueName: \"kubernetes.io/projected/46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b-kube-api-access-kwbr5\") pod \"marketplace-operator-79b997595-smlrq\" (UID: \"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b\") " pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.785285 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.788377 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.804342 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.809809 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.813549 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-catalog-content\") pod \"46c5822e-cd0c-4c66-828c-a0f9a50879de\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.813661 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gc8nl\" (UniqueName: \"kubernetes.io/projected/46c5822e-cd0c-4c66-828c-a0f9a50879de-kube-api-access-gc8nl\") pod \"46c5822e-cd0c-4c66-828c-a0f9a50879de\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.813697 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-utilities\") pod \"46c5822e-cd0c-4c66-828c-a0f9a50879de\" (UID: \"46c5822e-cd0c-4c66-828c-a0f9a50879de\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.822709 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-utilities" (OuterVolumeSpecName: "utilities") pod "46c5822e-cd0c-4c66-828c-a0f9a50879de" (UID: "46c5822e-cd0c-4c66-828c-a0f9a50879de"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.831824 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46c5822e-cd0c-4c66-828c-a0f9a50879de-kube-api-access-gc8nl" (OuterVolumeSpecName: "kube-api-access-gc8nl") pod "46c5822e-cd0c-4c66-828c-a0f9a50879de" (UID: "46c5822e-cd0c-4c66-828c-a0f9a50879de"). InnerVolumeSpecName "kube-api-access-gc8nl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.856157 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.857251 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.888060 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46c5822e-cd0c-4c66-828c-a0f9a50879de" (UID: "46c5822e-cd0c-4c66-828c-a0f9a50879de"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914468 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-catalog-content\") pod \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914508 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-utilities\") pod \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914528 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-trusted-ca\") pod \"5dfe4211-4bb6-47e4-9797-652393f66bc5\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914546 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-catalog-content\") pod \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914595 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-operator-metrics\") pod \"5dfe4211-4bb6-47e4-9797-652393f66bc5\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914614 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-cbl65\" (UniqueName: \"kubernetes.io/projected/afd2fd7e-b101-41fa-bc35-87cdf06d791a-kube-api-access-cbl65\") pod \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914632 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5ms8k\" (UniqueName: \"kubernetes.io/projected/5dfe4211-4bb6-47e4-9797-652393f66bc5-kube-api-access-5ms8k\") pod \"5dfe4211-4bb6-47e4-9797-652393f66bc5\" (UID: \"5dfe4211-4bb6-47e4-9797-652393f66bc5\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914683 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zdqdr\" (UniqueName: \"kubernetes.io/projected/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-kube-api-access-zdqdr\") pod \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\" (UID: \"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914712 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-catalog-content\") pod \"470a3d16-cc7d-4824-a36e-a6004d9b530f\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914728 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-utilities\") pod \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\" (UID: \"afd2fd7e-b101-41fa-bc35-87cdf06d791a\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914762 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-utilities\") pod \"470a3d16-cc7d-4824-a36e-a6004d9b530f\" (UID: 
\"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914792 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2pk7\" (UniqueName: \"kubernetes.io/projected/470a3d16-cc7d-4824-a36e-a6004d9b530f-kube-api-access-v2pk7\") pod \"470a3d16-cc7d-4824-a36e-a6004d9b530f\" (UID: \"470a3d16-cc7d-4824-a36e-a6004d9b530f\") " Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914964 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914977 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gc8nl\" (UniqueName: \"kubernetes.io/projected/46c5822e-cd0c-4c66-828c-a0f9a50879de-kube-api-access-gc8nl\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.914988 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46c5822e-cd0c-4c66-828c-a0f9a50879de-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.915464 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "5dfe4211-4bb6-47e4-9797-652393f66bc5" (UID: "5dfe4211-4bb6-47e4-9797-652393f66bc5"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.915671 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-utilities" (OuterVolumeSpecName: "utilities") pod "afd2fd7e-b101-41fa-bc35-87cdf06d791a" (UID: "afd2fd7e-b101-41fa-bc35-87cdf06d791a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.916104 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-utilities" (OuterVolumeSpecName: "utilities") pod "470a3d16-cc7d-4824-a36e-a6004d9b530f" (UID: "470a3d16-cc7d-4824-a36e-a6004d9b530f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.917711 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dfe4211-4bb6-47e4-9797-652393f66bc5-kube-api-access-5ms8k" (OuterVolumeSpecName: "kube-api-access-5ms8k") pod "5dfe4211-4bb6-47e4-9797-652393f66bc5" (UID: "5dfe4211-4bb6-47e4-9797-652393f66bc5"). InnerVolumeSpecName "kube-api-access-5ms8k". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.920143 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "5dfe4211-4bb6-47e4-9797-652393f66bc5" (UID: "5dfe4211-4bb6-47e4-9797-652393f66bc5"). InnerVolumeSpecName "marketplace-operator-metrics". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.921142 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-utilities" (OuterVolumeSpecName: "utilities") pod "505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" (UID: "505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.946051 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-kube-api-access-zdqdr" (OuterVolumeSpecName: "kube-api-access-zdqdr") pod "505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" (UID: "505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9"). InnerVolumeSpecName "kube-api-access-zdqdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.946619 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afd2fd7e-b101-41fa-bc35-87cdf06d791a-kube-api-access-cbl65" (OuterVolumeSpecName: "kube-api-access-cbl65") pod "afd2fd7e-b101-41fa-bc35-87cdf06d791a" (UID: "afd2fd7e-b101-41fa-bc35-87cdf06d791a"). InnerVolumeSpecName "kube-api-access-cbl65". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.946995 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" (UID: "505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.947075 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/470a3d16-cc7d-4824-a36e-a6004d9b530f-kube-api-access-v2pk7" (OuterVolumeSpecName: "kube-api-access-v2pk7") pod "470a3d16-cc7d-4824-a36e-a6004d9b530f" (UID: "470a3d16-cc7d-4824-a36e-a6004d9b530f"). InnerVolumeSpecName "kube-api-access-v2pk7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:45:55 crc kubenswrapper[4949]: I1001 15:45:55.995419 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "afd2fd7e-b101-41fa-bc35-87cdf06d791a" (UID: "afd2fd7e-b101-41fa-bc35-87cdf06d791a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016171 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zdqdr\" (UniqueName: \"kubernetes.io/projected/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-kube-api-access-zdqdr\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016223 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016241 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016253 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v2pk7\" (UniqueName: 
\"kubernetes.io/projected/470a3d16-cc7d-4824-a36e-a6004d9b530f-kube-api-access-v2pk7\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016265 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/afd2fd7e-b101-41fa-bc35-87cdf06d791a-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016276 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016287 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016296 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016304 4949 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5dfe4211-4bb6-47e4-9797-652393f66bc5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016313 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbl65\" (UniqueName: \"kubernetes.io/projected/afd2fd7e-b101-41fa-bc35-87cdf06d791a-kube-api-access-cbl65\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.016321 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5ms8k\" (UniqueName: 
\"kubernetes.io/projected/5dfe4211-4bb6-47e4-9797-652393f66bc5-kube-api-access-5ms8k\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.019742 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "470a3d16-cc7d-4824-a36e-a6004d9b530f" (UID: "470a3d16-cc7d-4824-a36e-a6004d9b530f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.117529 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/470a3d16-cc7d-4824-a36e-a6004d9b530f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.257968 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-smlrq"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.465143 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zq8nf" event={"ID":"46c5822e-cd0c-4c66-828c-a0f9a50879de","Type":"ContainerDied","Data":"20c23a00008a36a3933b89020c697db11ee2a0629cca84330c9d6c522f6c895f"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.465190 4949 scope.go:117] "RemoveContainer" containerID="8eb22fa102e7146cdac577d89d90ce1d18bc0d28daa180a5496053049aa33e35" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.465298 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zq8nf" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.476568 4949 generic.go:334] "Generic (PLEG): container finished" podID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerID="8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489" exitCode=0 Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.476635 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ltmmw" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.476638 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltmmw" event={"ID":"470a3d16-cc7d-4824-a36e-a6004d9b530f","Type":"ContainerDied","Data":"8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.476681 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ltmmw" event={"ID":"470a3d16-cc7d-4824-a36e-a6004d9b530f","Type":"ContainerDied","Data":"2d87cbe69ac722606b3b33c0867feb83356ccd65686a71c23c1323296494b73e"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.480665 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6s25h" event={"ID":"afd2fd7e-b101-41fa-bc35-87cdf06d791a","Type":"ContainerDied","Data":"cfe976f41015920846695c1ac8679079c9f3c357cca837923733892fcdbc42ce"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.480693 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6s25h" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.484340 4949 generic.go:334] "Generic (PLEG): container finished" podID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerID="cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e" exitCode=0 Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.484400 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vm94" event={"ID":"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9","Type":"ContainerDied","Data":"cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.484440 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-2vm94" event={"ID":"505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9","Type":"ContainerDied","Data":"e9f89d1056edd940d47e5489fcf2ab5fc92b064001eb81d1285215b738cde6f2"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.484506 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-2vm94" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.490605 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" event={"ID":"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b","Type":"ContainerStarted","Data":"3f3bb14ad7b7880029a10eeab31686de3570fce88f9c37fbbed874543f92443c"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.490658 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" event={"ID":"46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b","Type":"ContainerStarted","Data":"cb1f6bf1168b39bc571ea8a6d662857f6d149be79a0261b9f100bd7675f3a39e"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.491872 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.492513 4949 generic.go:334] "Generic (PLEG): container finished" podID="5dfe4211-4bb6-47e4-9797-652393f66bc5" containerID="ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f" exitCode=0 Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.492549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" event={"ID":"5dfe4211-4bb6-47e4-9797-652393f66bc5","Type":"ContainerDied","Data":"ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.492573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" event={"ID":"5dfe4211-4bb6-47e4-9797-652393f66bc5","Type":"ContainerDied","Data":"2733f566caeac0e0ae9cc726b4ac76dfdf19d147082ff0908667a3c686d7b036"} Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.492594 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-nbrh4" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.492601 4949 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-smlrq container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" start-of-body= Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.492636 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" podUID="46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.56:8080/healthz\": dial tcp 10.217.0.56:8080: connect: connection refused" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.506479 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" podStartSLOduration=1.5064574240000002 podStartE2EDuration="1.506457424s" podCreationTimestamp="2025-10-01 15:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:45:56.504848148 +0000 UTC m=+255.810454339" watchObservedRunningTime="2025-10-01 15:45:56.506457424 +0000 UTC m=+255.812063615" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.530291 4949 scope.go:117] "RemoveContainer" containerID="94654bafe6e17ba09c266df79c94ed2f7c08afb95ef52ea434cd5aa92f51e335" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.543896 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vm94"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.547963 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-2vm94"] Oct 01 15:45:56 crc kubenswrapper[4949]: 
I1001 15:45:56.558181 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ltmmw"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.563995 4949 scope.go:117] "RemoveContainer" containerID="44ebec05e05c4a73be29218a81b45c6388d6a3cc3338a94ecd3dbd446992b2d8" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.570991 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ltmmw"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.577221 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zq8nf"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.580694 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zq8nf"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.591431 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6s25h"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.595366 4949 scope.go:117] "RemoveContainer" containerID="8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.595388 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6s25h"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.600204 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbrh4"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.602515 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-nbrh4"] Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.611813 4949 scope.go:117] "RemoveContainer" containerID="c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.627550 4949 scope.go:117] "RemoveContainer" 
containerID="cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.641424 4949 scope.go:117] "RemoveContainer" containerID="8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489" Oct 01 15:45:56 crc kubenswrapper[4949]: E1001 15:45:56.641826 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489\": container with ID starting with 8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489 not found: ID does not exist" containerID="8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.641855 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489"} err="failed to get container status \"8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489\": rpc error: code = NotFound desc = could not find container \"8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489\": container with ID starting with 8c7d2bf4162d305f77c8e22dd7d3dea44b555c917c6cc8e934ee01bcddb19489 not found: ID does not exist" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.641877 4949 scope.go:117] "RemoveContainer" containerID="c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2" Oct 01 15:45:56 crc kubenswrapper[4949]: E1001 15:45:56.642270 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2\": container with ID starting with c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2 not found: ID does not exist" containerID="c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2" Oct 01 15:45:56 crc 
kubenswrapper[4949]: I1001 15:45:56.642319 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2"} err="failed to get container status \"c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2\": rpc error: code = NotFound desc = could not find container \"c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2\": container with ID starting with c244fb525c7480c1961abc3add069d89fc58e6085d7abb36f8585f2ecf509dc2 not found: ID does not exist" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.642352 4949 scope.go:117] "RemoveContainer" containerID="cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6" Oct 01 15:45:56 crc kubenswrapper[4949]: E1001 15:45:56.642650 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6\": container with ID starting with cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6 not found: ID does not exist" containerID="cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.642688 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6"} err="failed to get container status \"cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6\": rpc error: code = NotFound desc = could not find container \"cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6\": container with ID starting with cdedce3ba48b13a5321aa560db01b76033274451a2462a953d15ac96fe30bec6 not found: ID does not exist" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.642708 4949 scope.go:117] "RemoveContainer" containerID="1346fe2835bf6d37b720c4da8785a4ab92143d5ddf234d63003dba746e29ffdc" Oct 01 
15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.657116 4949 scope.go:117] "RemoveContainer" containerID="76dfbbf9404c1093d263e5dbc6b6298476b728c13f3a5dd04e97f4465d2d1aca" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.670457 4949 scope.go:117] "RemoveContainer" containerID="a39ccf65c6d5bb62356609ef55f26ecbe3832bb02f7be6e6c5b1839927333d6b" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.683565 4949 scope.go:117] "RemoveContainer" containerID="cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.696620 4949 scope.go:117] "RemoveContainer" containerID="8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.718883 4949 scope.go:117] "RemoveContainer" containerID="8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.734355 4949 scope.go:117] "RemoveContainer" containerID="cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e" Oct 01 15:45:56 crc kubenswrapper[4949]: E1001 15:45:56.734838 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e\": container with ID starting with cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e not found: ID does not exist" containerID="cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.734880 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e"} err="failed to get container status \"cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e\": rpc error: code = NotFound desc = could not find container \"cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e\": 
container with ID starting with cef4dc7cdeb60f285e769f2e617c87606658bd7a09505f38d66d1ad57428834e not found: ID does not exist" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.734908 4949 scope.go:117] "RemoveContainer" containerID="8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d" Oct 01 15:45:56 crc kubenswrapper[4949]: E1001 15:45:56.735385 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d\": container with ID starting with 8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d not found: ID does not exist" containerID="8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.735415 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d"} err="failed to get container status \"8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d\": rpc error: code = NotFound desc = could not find container \"8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d\": container with ID starting with 8ee7f371307521f7aa33fe41ac76c228da96c3b042fa783b6d6d90f6d8a3b09d not found: ID does not exist" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.735436 4949 scope.go:117] "RemoveContainer" containerID="8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0" Oct 01 15:45:56 crc kubenswrapper[4949]: E1001 15:45:56.735784 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0\": container with ID starting with 8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0 not found: ID does not exist" 
containerID="8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.735802 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0"} err="failed to get container status \"8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0\": rpc error: code = NotFound desc = could not find container \"8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0\": container with ID starting with 8b8386ad119c1cf40d321913a8d81ce5676d70974948a5aaf9ab51fb61d82eb0 not found: ID does not exist" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.735814 4949 scope.go:117] "RemoveContainer" containerID="ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.747347 4949 scope.go:117] "RemoveContainer" containerID="ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f" Oct 01 15:45:56 crc kubenswrapper[4949]: E1001 15:45:56.747758 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f\": container with ID starting with ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f not found: ID does not exist" containerID="ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f" Oct 01 15:45:56 crc kubenswrapper[4949]: I1001 15:45:56.747790 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f"} err="failed to get container status \"ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f\": rpc error: code = NotFound desc = could not find container \"ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f\": container with ID starting with 
ceca7cd25d5788c1b882b4b53d8f231946357954b5c03b1f602eeb286ca7d13f not found: ID does not exist" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144361 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-4wgtk"] Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144588 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerName="extract-content" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144604 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerName="extract-content" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144614 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144622 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144640 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144648 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144658 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerName="extract-utilities" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144665 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerName="extract-utilities" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144676 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerName="extract-utilities" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144684 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerName="extract-utilities" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144696 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144704 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144714 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerName="extract-utilities" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144721 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerName="extract-utilities" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144730 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dfe4211-4bb6-47e4-9797-652393f66bc5" containerName="marketplace-operator" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144737 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dfe4211-4bb6-47e4-9797-652393f66bc5" containerName="marketplace-operator" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144746 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144753 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144765 4949 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerName="extract-utilities" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144773 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerName="extract-utilities" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144783 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerName="extract-content" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144790 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerName="extract-content" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144804 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerName="extract-content" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144810 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerName="extract-content" Oct 01 15:45:57 crc kubenswrapper[4949]: E1001 15:45:57.144819 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerName="extract-content" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144826 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerName="extract-content" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144944 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dfe4211-4bb6-47e4-9797-652393f66bc5" containerName="marketplace-operator" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144956 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144966 4949 memory_manager.go:354] "RemoveStaleState 
removing state" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144973 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.144984 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" containerName="registry-server" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.145723 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.148878 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.154814 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4wgtk"] Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.231389 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c41ac2-4e6b-4656-b240-03036d30778d-utilities\") pod \"redhat-marketplace-4wgtk\" (UID: \"00c41ac2-4e6b-4656-b240-03036d30778d\") " pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.231693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvqdg\" (UniqueName: \"kubernetes.io/projected/00c41ac2-4e6b-4656-b240-03036d30778d-kube-api-access-pvqdg\") pod \"redhat-marketplace-4wgtk\" (UID: \"00c41ac2-4e6b-4656-b240-03036d30778d\") " pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.231801 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c41ac2-4e6b-4656-b240-03036d30778d-catalog-content\") pod \"redhat-marketplace-4wgtk\" (UID: \"00c41ac2-4e6b-4656-b240-03036d30778d\") " pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.333174 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvqdg\" (UniqueName: \"kubernetes.io/projected/00c41ac2-4e6b-4656-b240-03036d30778d-kube-api-access-pvqdg\") pod \"redhat-marketplace-4wgtk\" (UID: \"00c41ac2-4e6b-4656-b240-03036d30778d\") " pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.333237 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c41ac2-4e6b-4656-b240-03036d30778d-catalog-content\") pod \"redhat-marketplace-4wgtk\" (UID: \"00c41ac2-4e6b-4656-b240-03036d30778d\") " pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.333290 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c41ac2-4e6b-4656-b240-03036d30778d-utilities\") pod \"redhat-marketplace-4wgtk\" (UID: \"00c41ac2-4e6b-4656-b240-03036d30778d\") " pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.333825 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/00c41ac2-4e6b-4656-b240-03036d30778d-utilities\") pod \"redhat-marketplace-4wgtk\" (UID: \"00c41ac2-4e6b-4656-b240-03036d30778d\") " pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.333947 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/00c41ac2-4e6b-4656-b240-03036d30778d-catalog-content\") pod \"redhat-marketplace-4wgtk\" (UID: \"00c41ac2-4e6b-4656-b240-03036d30778d\") " pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.349692 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvqdg\" (UniqueName: \"kubernetes.io/projected/00c41ac2-4e6b-4656-b240-03036d30778d-kube-api-access-pvqdg\") pod \"redhat-marketplace-4wgtk\" (UID: \"00c41ac2-4e6b-4656-b240-03036d30778d\") " pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.463209 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.510114 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-smlrq" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.609717 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46c5822e-cd0c-4c66-828c-a0f9a50879de" path="/var/lib/kubelet/pods/46c5822e-cd0c-4c66-828c-a0f9a50879de/volumes" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.610559 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="470a3d16-cc7d-4824-a36e-a6004d9b530f" path="/var/lib/kubelet/pods/470a3d16-cc7d-4824-a36e-a6004d9b530f/volumes" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.611313 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9" path="/var/lib/kubelet/pods/505154a2-7ec8-4803-bbe0-0bc5c3d0bdb9/volumes" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.612643 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dfe4211-4bb6-47e4-9797-652393f66bc5" 
path="/var/lib/kubelet/pods/5dfe4211-4bb6-47e4-9797-652393f66bc5/volumes" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.613215 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="afd2fd7e-b101-41fa-bc35-87cdf06d791a" path="/var/lib/kubelet/pods/afd2fd7e-b101-41fa-bc35-87cdf06d791a/volumes" Oct 01 15:45:57 crc kubenswrapper[4949]: I1001 15:45:57.858434 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-4wgtk"] Oct 01 15:45:57 crc kubenswrapper[4949]: W1001 15:45:57.862158 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00c41ac2_4e6b_4656_b240_03036d30778d.slice/crio-0b1211d1dbf1dbbbab7ef552015fea8c76b64bbec47f35a983bc03f5f09327ae WatchSource:0}: Error finding container 0b1211d1dbf1dbbbab7ef552015fea8c76b64bbec47f35a983bc03f5f09327ae: Status 404 returned error can't find the container with id 0b1211d1dbf1dbbbab7ef552015fea8c76b64bbec47f35a983bc03f5f09327ae Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.146742 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-g84kx"] Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.148152 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.150440 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.152883 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g84kx"] Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.245012 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-catalog-content\") pod \"certified-operators-g84kx\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.245104 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l68cs\" (UniqueName: \"kubernetes.io/projected/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-kube-api-access-l68cs\") pod \"certified-operators-g84kx\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.245152 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-utilities\") pod \"certified-operators-g84kx\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.345945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-catalog-content\") pod \"certified-operators-g84kx\" (UID: 
\"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.346061 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l68cs\" (UniqueName: \"kubernetes.io/projected/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-kube-api-access-l68cs\") pod \"certified-operators-g84kx\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.346102 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-utilities\") pod \"certified-operators-g84kx\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.346523 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-utilities\") pod \"certified-operators-g84kx\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.346528 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-catalog-content\") pod \"certified-operators-g84kx\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.363173 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l68cs\" (UniqueName: \"kubernetes.io/projected/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-kube-api-access-l68cs\") pod \"certified-operators-g84kx\" (UID: 
\"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.471771 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.511424 4949 generic.go:334] "Generic (PLEG): container finished" podID="00c41ac2-4e6b-4656-b240-03036d30778d" containerID="8c24b82395ac0da2016c81ce00669658ca95ad783342ec8f9583559a1dd463e0" exitCode=0 Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.511529 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wgtk" event={"ID":"00c41ac2-4e6b-4656-b240-03036d30778d","Type":"ContainerDied","Data":"8c24b82395ac0da2016c81ce00669658ca95ad783342ec8f9583559a1dd463e0"} Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.511577 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wgtk" event={"ID":"00c41ac2-4e6b-4656-b240-03036d30778d","Type":"ContainerStarted","Data":"0b1211d1dbf1dbbbab7ef552015fea8c76b64bbec47f35a983bc03f5f09327ae"} Oct 01 15:45:58 crc kubenswrapper[4949]: I1001 15:45:58.863071 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-g84kx"] Oct 01 15:45:58 crc kubenswrapper[4949]: W1001 15:45:58.868447 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08f8c987_cdde_4a43_8a61_e01d63fdb5e9.slice/crio-09ebb9e1dd3c66719c441a22700fe4bf49222a0a8e5b0ca90ccb676dfaf4fa53 WatchSource:0}: Error finding container 09ebb9e1dd3c66719c441a22700fe4bf49222a0a8e5b0ca90ccb676dfaf4fa53: Status 404 returned error can't find the container with id 09ebb9e1dd3c66719c441a22700fe4bf49222a0a8e5b0ca90ccb676dfaf4fa53 Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.523619 4949 generic.go:334] "Generic 
(PLEG): container finished" podID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerID="ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070" exitCode=0 Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.523716 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g84kx" event={"ID":"08f8c987-cdde-4a43-8a61-e01d63fdb5e9","Type":"ContainerDied","Data":"ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070"} Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.523983 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g84kx" event={"ID":"08f8c987-cdde-4a43-8a61-e01d63fdb5e9","Type":"ContainerStarted","Data":"09ebb9e1dd3c66719c441a22700fe4bf49222a0a8e5b0ca90ccb676dfaf4fa53"} Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.549219 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zzv5b"] Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.550879 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.555119 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.564843 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzv5b"] Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.658718 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6wkb\" (UniqueName: \"kubernetes.io/projected/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-kube-api-access-n6wkb\") pod \"redhat-operators-zzv5b\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.658819 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-catalog-content\") pod \"redhat-operators-zzv5b\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.658871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-utilities\") pod \"redhat-operators-zzv5b\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.760034 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6wkb\" (UniqueName: \"kubernetes.io/projected/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-kube-api-access-n6wkb\") pod \"redhat-operators-zzv5b\" (UID: 
\"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.760115 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-catalog-content\") pod \"redhat-operators-zzv5b\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.760187 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-utilities\") pod \"redhat-operators-zzv5b\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.760626 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-utilities\") pod \"redhat-operators-zzv5b\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.760774 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-catalog-content\") pod \"redhat-operators-zzv5b\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.777964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6wkb\" (UniqueName: \"kubernetes.io/projected/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-kube-api-access-n6wkb\") pod \"redhat-operators-zzv5b\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " 
pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:45:59 crc kubenswrapper[4949]: I1001 15:45:59.875347 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.257212 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zzv5b"] Oct 01 15:46:00 crc kubenswrapper[4949]: W1001 15:46:00.270065 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod46dbc1ee_6aff_455c_9f06_0a9dc719ce20.slice/crio-5c3dcb641c531c097f628d9961b61a4166e31640cf6440032ae1555a668d030b WatchSource:0}: Error finding container 5c3dcb641c531c097f628d9961b61a4166e31640cf6440032ae1555a668d030b: Status 404 returned error can't find the container with id 5c3dcb641c531c097f628d9961b61a4166e31640cf6440032ae1555a668d030b Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.530503 4949 generic.go:334] "Generic (PLEG): container finished" podID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerID="67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f" exitCode=0 Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.531822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzv5b" event={"ID":"46dbc1ee-6aff-455c-9f06-0a9dc719ce20","Type":"ContainerDied","Data":"67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f"} Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.531955 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzv5b" event={"ID":"46dbc1ee-6aff-455c-9f06-0a9dc719ce20","Type":"ContainerStarted","Data":"5c3dcb641c531c097f628d9961b61a4166e31640cf6440032ae1555a668d030b"} Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.549805 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="00c41ac2-4e6b-4656-b240-03036d30778d" containerID="540dab21178b5e6e43899263a71ad366eca083bded43dc9102baf218193dd137" exitCode=0 Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.551914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wgtk" event={"ID":"00c41ac2-4e6b-4656-b240-03036d30778d","Type":"ContainerDied","Data":"540dab21178b5e6e43899263a71ad366eca083bded43dc9102baf218193dd137"} Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.551956 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-phqz6"] Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.552818 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.555932 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.559748 4949 generic.go:334] "Generic (PLEG): container finished" podID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerID="9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae" exitCode=0 Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.559918 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g84kx" event={"ID":"08f8c987-cdde-4a43-8a61-e01d63fdb5e9","Type":"ContainerDied","Data":"9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae"} Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.561008 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phqz6"] Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.598681 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/15e92864-5078-4780-ac9c-a5064ba66ada-catalog-content\") pod \"community-operators-phqz6\" (UID: \"15e92864-5078-4780-ac9c-a5064ba66ada\") " pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.598780 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-97plr\" (UniqueName: \"kubernetes.io/projected/15e92864-5078-4780-ac9c-a5064ba66ada-kube-api-access-97plr\") pod \"community-operators-phqz6\" (UID: \"15e92864-5078-4780-ac9c-a5064ba66ada\") " pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.598822 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e92864-5078-4780-ac9c-a5064ba66ada-utilities\") pod \"community-operators-phqz6\" (UID: \"15e92864-5078-4780-ac9c-a5064ba66ada\") " pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.700084 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e92864-5078-4780-ac9c-a5064ba66ada-catalog-content\") pod \"community-operators-phqz6\" (UID: \"15e92864-5078-4780-ac9c-a5064ba66ada\") " pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.700504 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/15e92864-5078-4780-ac9c-a5064ba66ada-catalog-content\") pod \"community-operators-phqz6\" (UID: \"15e92864-5078-4780-ac9c-a5064ba66ada\") " pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.701083 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-97plr\" (UniqueName: \"kubernetes.io/projected/15e92864-5078-4780-ac9c-a5064ba66ada-kube-api-access-97plr\") pod \"community-operators-phqz6\" (UID: \"15e92864-5078-4780-ac9c-a5064ba66ada\") " pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.701736 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e92864-5078-4780-ac9c-a5064ba66ada-utilities\") pod \"community-operators-phqz6\" (UID: \"15e92864-5078-4780-ac9c-a5064ba66ada\") " pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.701734 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/15e92864-5078-4780-ac9c-a5064ba66ada-utilities\") pod \"community-operators-phqz6\" (UID: \"15e92864-5078-4780-ac9c-a5064ba66ada\") " pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.721895 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-97plr\" (UniqueName: \"kubernetes.io/projected/15e92864-5078-4780-ac9c-a5064ba66ada-kube-api-access-97plr\") pod \"community-operators-phqz6\" (UID: \"15e92864-5078-4780-ac9c-a5064ba66ada\") " pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:00 crc kubenswrapper[4949]: I1001 15:46:00.924988 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:01 crc kubenswrapper[4949]: I1001 15:46:01.295150 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-phqz6"] Oct 01 15:46:01 crc kubenswrapper[4949]: W1001 15:46:01.302284 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15e92864_5078_4780_ac9c_a5064ba66ada.slice/crio-f951ae5f67b520ba9a7945759b414d0651e1e67c73051493aa618ca507437cf4 WatchSource:0}: Error finding container f951ae5f67b520ba9a7945759b414d0651e1e67c73051493aa618ca507437cf4: Status 404 returned error can't find the container with id f951ae5f67b520ba9a7945759b414d0651e1e67c73051493aa618ca507437cf4 Oct 01 15:46:01 crc kubenswrapper[4949]: I1001 15:46:01.566431 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-4wgtk" event={"ID":"00c41ac2-4e6b-4656-b240-03036d30778d","Type":"ContainerStarted","Data":"1ea035283659c374f501c2f07280da17fbf582d01aaad5ee83e7c3dfa80d0f0e"} Oct 01 15:46:01 crc kubenswrapper[4949]: I1001 15:46:01.569019 4949 generic.go:334] "Generic (PLEG): container finished" podID="15e92864-5078-4780-ac9c-a5064ba66ada" containerID="a8972b393c0220d20813f1da987532aeb8b13cd42481354095f3961306acd5d2" exitCode=0 Oct 01 15:46:01 crc kubenswrapper[4949]: I1001 15:46:01.569065 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phqz6" event={"ID":"15e92864-5078-4780-ac9c-a5064ba66ada","Type":"ContainerDied","Data":"a8972b393c0220d20813f1da987532aeb8b13cd42481354095f3961306acd5d2"} Oct 01 15:46:01 crc kubenswrapper[4949]: I1001 15:46:01.569112 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phqz6" 
event={"ID":"15e92864-5078-4780-ac9c-a5064ba66ada","Type":"ContainerStarted","Data":"f951ae5f67b520ba9a7945759b414d0651e1e67c73051493aa618ca507437cf4"} Oct 01 15:46:01 crc kubenswrapper[4949]: I1001 15:46:01.571385 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g84kx" event={"ID":"08f8c987-cdde-4a43-8a61-e01d63fdb5e9","Type":"ContainerStarted","Data":"e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553"} Oct 01 15:46:01 crc kubenswrapper[4949]: I1001 15:46:01.572612 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzv5b" event={"ID":"46dbc1ee-6aff-455c-9f06-0a9dc719ce20","Type":"ContainerStarted","Data":"a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e"} Oct 01 15:46:01 crc kubenswrapper[4949]: I1001 15:46:01.614627 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-4wgtk" podStartSLOduration=1.954538791 podStartE2EDuration="4.614604595s" podCreationTimestamp="2025-10-01 15:45:57 +0000 UTC" firstStartedPulling="2025-10-01 15:45:58.513673895 +0000 UTC m=+257.819280086" lastFinishedPulling="2025-10-01 15:46:01.173739699 +0000 UTC m=+260.479345890" observedRunningTime="2025-10-01 15:46:01.595271385 +0000 UTC m=+260.900877606" watchObservedRunningTime="2025-10-01 15:46:01.614604595 +0000 UTC m=+260.920210806" Oct 01 15:46:01 crc kubenswrapper[4949]: I1001 15:46:01.652157 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-g84kx" podStartSLOduration=2.032607718 podStartE2EDuration="3.652139202s" podCreationTimestamp="2025-10-01 15:45:58 +0000 UTC" firstStartedPulling="2025-10-01 15:45:59.526248785 +0000 UTC m=+258.831854976" lastFinishedPulling="2025-10-01 15:46:01.145780269 +0000 UTC m=+260.451386460" observedRunningTime="2025-10-01 15:46:01.649807164 +0000 UTC m=+260.955413355" 
watchObservedRunningTime="2025-10-01 15:46:01.652139202 +0000 UTC m=+260.957745393" Oct 01 15:46:02 crc kubenswrapper[4949]: I1001 15:46:02.579256 4949 generic.go:334] "Generic (PLEG): container finished" podID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerID="a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e" exitCode=0 Oct 01 15:46:02 crc kubenswrapper[4949]: I1001 15:46:02.579356 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzv5b" event={"ID":"46dbc1ee-6aff-455c-9f06-0a9dc719ce20","Type":"ContainerDied","Data":"a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e"} Oct 01 15:46:03 crc kubenswrapper[4949]: E1001 15:46:03.958580 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15e92864_5078_4780_ac9c_a5064ba66ada.slice/crio-9bba2fc70839759aa0b8db0f461445667d2f826d7854be485d9e68ae18f7132f.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15e92864_5078_4780_ac9c_a5064ba66ada.slice/crio-conmon-9bba2fc70839759aa0b8db0f461445667d2f826d7854be485d9e68ae18f7132f.scope\": RecentStats: unable to find data in memory cache]" Oct 01 15:46:04 crc kubenswrapper[4949]: I1001 15:46:04.591921 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzv5b" event={"ID":"46dbc1ee-6aff-455c-9f06-0a9dc719ce20","Type":"ContainerStarted","Data":"558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278"} Oct 01 15:46:04 crc kubenswrapper[4949]: I1001 15:46:04.593762 4949 generic.go:334] "Generic (PLEG): container finished" podID="15e92864-5078-4780-ac9c-a5064ba66ada" containerID="9bba2fc70839759aa0b8db0f461445667d2f826d7854be485d9e68ae18f7132f" exitCode=0 Oct 01 15:46:04 crc kubenswrapper[4949]: I1001 15:46:04.593796 4949 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/community-operators-phqz6" event={"ID":"15e92864-5078-4780-ac9c-a5064ba66ada","Type":"ContainerDied","Data":"9bba2fc70839759aa0b8db0f461445667d2f826d7854be485d9e68ae18f7132f"} Oct 01 15:46:04 crc kubenswrapper[4949]: I1001 15:46:04.607774 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zzv5b" podStartSLOduration=3.123582162 podStartE2EDuration="5.607743883s" podCreationTimestamp="2025-10-01 15:45:59 +0000 UTC" firstStartedPulling="2025-10-01 15:46:00.53382299 +0000 UTC m=+259.839429201" lastFinishedPulling="2025-10-01 15:46:03.017984741 +0000 UTC m=+262.323590922" observedRunningTime="2025-10-01 15:46:04.605628412 +0000 UTC m=+263.911234603" watchObservedRunningTime="2025-10-01 15:46:04.607743883 +0000 UTC m=+263.913350074" Oct 01 15:46:05 crc kubenswrapper[4949]: I1001 15:46:05.600361 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-phqz6" event={"ID":"15e92864-5078-4780-ac9c-a5064ba66ada","Type":"ContainerStarted","Data":"27e8fd88469420cb3f29396d86a88a264c75e93b0e5041d96d3e4eb69965fb36"} Oct 01 15:46:05 crc kubenswrapper[4949]: I1001 15:46:05.623392 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-phqz6" podStartSLOduration=2.092558976 podStartE2EDuration="5.623373642s" podCreationTimestamp="2025-10-01 15:46:00 +0000 UTC" firstStartedPulling="2025-10-01 15:46:01.570108907 +0000 UTC m=+260.875715098" lastFinishedPulling="2025-10-01 15:46:05.100923573 +0000 UTC m=+264.406529764" observedRunningTime="2025-10-01 15:46:05.620760476 +0000 UTC m=+264.926366667" watchObservedRunningTime="2025-10-01 15:46:05.623373642 +0000 UTC m=+264.928979833" Oct 01 15:46:07 crc kubenswrapper[4949]: I1001 15:46:07.464056 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 
15:46:07 crc kubenswrapper[4949]: I1001 15:46:07.464367 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:46:07 crc kubenswrapper[4949]: I1001 15:46:07.512596 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:46:07 crc kubenswrapper[4949]: I1001 15:46:07.651952 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-4wgtk" Oct 01 15:46:08 crc kubenswrapper[4949]: I1001 15:46:08.472957 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:46:08 crc kubenswrapper[4949]: I1001 15:46:08.473009 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:46:08 crc kubenswrapper[4949]: I1001 15:46:08.517341 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:46:08 crc kubenswrapper[4949]: I1001 15:46:08.651184 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-g84kx" Oct 01 15:46:09 crc kubenswrapper[4949]: I1001 15:46:09.875539 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:46:09 crc kubenswrapper[4949]: I1001 15:46:09.875891 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:46:09 crc kubenswrapper[4949]: I1001 15:46:09.916652 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:46:10 crc kubenswrapper[4949]: I1001 15:46:10.659377 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 15:46:10 crc kubenswrapper[4949]: I1001 15:46:10.925625 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:10 crc kubenswrapper[4949]: I1001 15:46:10.925664 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:10 crc kubenswrapper[4949]: I1001 15:46:10.963733 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:46:11 crc kubenswrapper[4949]: I1001 15:46:11.667848 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-phqz6" Oct 01 15:47:18 crc kubenswrapper[4949]: I1001 15:47:18.038974 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:47:18 crc kubenswrapper[4949]: I1001 15:47:18.039579 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:47:48 crc kubenswrapper[4949]: I1001 15:47:48.039675 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:47:48 crc kubenswrapper[4949]: I1001 
15:47:48.040377 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:48:18 crc kubenswrapper[4949]: I1001 15:48:18.038460 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:48:18 crc kubenswrapper[4949]: I1001 15:48:18.039022 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:48:18 crc kubenswrapper[4949]: I1001 15:48:18.039076 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:48:18 crc kubenswrapper[4949]: I1001 15:48:18.039721 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"66ef7ab6a5e7fcf3b79f687490573d82ab228e320149f8042b009a1c999806ee"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 15:48:18 crc kubenswrapper[4949]: I1001 15:48:18.039789 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" 
containerName="machine-config-daemon" containerID="cri-o://66ef7ab6a5e7fcf3b79f687490573d82ab228e320149f8042b009a1c999806ee" gracePeriod=600 Oct 01 15:48:18 crc kubenswrapper[4949]: I1001 15:48:18.370244 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="66ef7ab6a5e7fcf3b79f687490573d82ab228e320149f8042b009a1c999806ee" exitCode=0 Oct 01 15:48:18 crc kubenswrapper[4949]: I1001 15:48:18.370436 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"66ef7ab6a5e7fcf3b79f687490573d82ab228e320149f8042b009a1c999806ee"} Oct 01 15:48:18 crc kubenswrapper[4949]: I1001 15:48:18.370574 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"d65dc5a00a68ef848a51564a98d129e0f9a7891d85cd971faa8d65554abab984"} Oct 01 15:48:18 crc kubenswrapper[4949]: I1001 15:48:18.370597 4949 scope.go:117] "RemoveContainer" containerID="a85a240a198e037219428db0de4c633c5e99064f7f13a6a8922eba7fa3fac633" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.701792 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6jllf"] Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.703142 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.717498 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6jllf"] Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.833112 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1d914cb3-d6bd-485a-b523-933025f39743-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.833194 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d914cb3-d6bd-485a-b523-933025f39743-trusted-ca\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.833243 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wnb2q\" (UniqueName: \"kubernetes.io/projected/1d914cb3-d6bd-485a-b523-933025f39743-kube-api-access-wnb2q\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.833305 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1d914cb3-d6bd-485a-b523-933025f39743-registry-tls\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.833328 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1d914cb3-d6bd-485a-b523-933025f39743-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.833477 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d914cb3-d6bd-485a-b523-933025f39743-bound-sa-token\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.833554 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.833586 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1d914cb3-d6bd-485a-b523-933025f39743-registry-certificates\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.858708 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.934985 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d914cb3-d6bd-485a-b523-933025f39743-bound-sa-token\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.935068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1d914cb3-d6bd-485a-b523-933025f39743-registry-certificates\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.935107 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1d914cb3-d6bd-485a-b523-933025f39743-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.935160 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d914cb3-d6bd-485a-b523-933025f39743-trusted-ca\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc 
kubenswrapper[4949]: I1001 15:49:01.935202 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wnb2q\" (UniqueName: \"kubernetes.io/projected/1d914cb3-d6bd-485a-b523-933025f39743-kube-api-access-wnb2q\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.935223 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1d914cb3-d6bd-485a-b523-933025f39743-registry-tls\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.935243 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1d914cb3-d6bd-485a-b523-933025f39743-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.935956 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/1d914cb3-d6bd-485a-b523-933025f39743-ca-trust-extracted\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.936587 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/1d914cb3-d6bd-485a-b523-933025f39743-registry-certificates\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.936655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/1d914cb3-d6bd-485a-b523-933025f39743-trusted-ca\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.941597 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/1d914cb3-d6bd-485a-b523-933025f39743-installation-pull-secrets\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.941926 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/1d914cb3-d6bd-485a-b523-933025f39743-registry-tls\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.958949 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wnb2q\" (UniqueName: \"kubernetes.io/projected/1d914cb3-d6bd-485a-b523-933025f39743-kube-api-access-wnb2q\") pod \"image-registry-66df7c8f76-6jllf\" (UID: \"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:01 crc kubenswrapper[4949]: I1001 15:49:01.959745 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/1d914cb3-d6bd-485a-b523-933025f39743-bound-sa-token\") pod \"image-registry-66df7c8f76-6jllf\" (UID: 
\"1d914cb3-d6bd-485a-b523-933025f39743\") " pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:02 crc kubenswrapper[4949]: I1001 15:49:02.019097 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:02 crc kubenswrapper[4949]: I1001 15:49:02.210851 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-6jllf"] Oct 01 15:49:02 crc kubenswrapper[4949]: I1001 15:49:02.619923 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" event={"ID":"1d914cb3-d6bd-485a-b523-933025f39743","Type":"ContainerStarted","Data":"67ac9b62ed6a80b51b2c5d5a9fcd6b2a915c23c064db3dd407f10473affff7f5"} Oct 01 15:49:02 crc kubenswrapper[4949]: I1001 15:49:02.620287 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" event={"ID":"1d914cb3-d6bd-485a-b523-933025f39743","Type":"ContainerStarted","Data":"e8acc304e2e28af8f135ac09d138964968e8b2eee70c7e9f8ceee1fbde75d8f1"} Oct 01 15:49:02 crc kubenswrapper[4949]: I1001 15:49:02.621711 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:02 crc kubenswrapper[4949]: I1001 15:49:02.638146 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" podStartSLOduration=1.6381064699999999 podStartE2EDuration="1.63810647s" podCreationTimestamp="2025-10-01 15:49:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:49:02.63708199 +0000 UTC m=+441.942688201" watchObservedRunningTime="2025-10-01 15:49:02.63810647 +0000 UTC m=+441.943712671" Oct 01 15:49:22 crc kubenswrapper[4949]: I1001 
15:49:22.025869 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-6jllf" Oct 01 15:49:22 crc kubenswrapper[4949]: I1001 15:49:22.076382 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47lbj"] Oct 01 15:49:47 crc kubenswrapper[4949]: I1001 15:49:47.111950 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" podUID="2e81ff5c-f656-4f24-bf49-33fbec1f7052" containerName="registry" containerID="cri-o://7b8a6c2394399b015d49a35cff7d8692b28191ee5c40aa0a9bc1f42f61d3e94b" gracePeriod=30 Oct 01 15:49:47 crc kubenswrapper[4949]: I1001 15:49:47.874767 4949 generic.go:334] "Generic (PLEG): container finished" podID="2e81ff5c-f656-4f24-bf49-33fbec1f7052" containerID="7b8a6c2394399b015d49a35cff7d8692b28191ee5c40aa0a9bc1f42f61d3e94b" exitCode=0 Oct 01 15:49:47 crc kubenswrapper[4949]: I1001 15:49:47.874870 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" event={"ID":"2e81ff5c-f656-4f24-bf49-33fbec1f7052","Type":"ContainerDied","Data":"7b8a6c2394399b015d49a35cff7d8692b28191ee5c40aa0a9bc1f42f61d3e94b"} Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.107215 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.269833 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-certificates\") pod \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.269923 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-bound-sa-token\") pod \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.269944 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-tls\") pod \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.270841 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "2e81ff5c-f656-4f24-bf49-33fbec1f7052" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.271092 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.271184 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e81ff5c-f656-4f24-bf49-33fbec1f7052-ca-trust-extracted\") pod \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.271212 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-trusted-ca\") pod \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.271998 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2e81ff5c-f656-4f24-bf49-33fbec1f7052" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.272212 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bpcjq\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-kube-api-access-bpcjq\") pod \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.272266 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e81ff5c-f656-4f24-bf49-33fbec1f7052-installation-pull-secrets\") pod \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\" (UID: \"2e81ff5c-f656-4f24-bf49-33fbec1f7052\") " Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.272742 4949 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-certificates\") on node \"crc\" DevicePath \"\"" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.272768 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2e81ff5c-f656-4f24-bf49-33fbec1f7052-trusted-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.277973 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "2e81ff5c-f656-4f24-bf49-33fbec1f7052" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052"). InnerVolumeSpecName "bound-sa-token". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.278017 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e81ff5c-f656-4f24-bf49-33fbec1f7052-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "2e81ff5c-f656-4f24-bf49-33fbec1f7052" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.278116 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-kube-api-access-bpcjq" (OuterVolumeSpecName: "kube-api-access-bpcjq") pod "2e81ff5c-f656-4f24-bf49-33fbec1f7052" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052"). InnerVolumeSpecName "kube-api-access-bpcjq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.278403 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "2e81ff5c-f656-4f24-bf49-33fbec1f7052" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.289249 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2e81ff5c-f656-4f24-bf49-33fbec1f7052-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "2e81ff5c-f656-4f24-bf49-33fbec1f7052" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052"). InnerVolumeSpecName "ca-trust-extracted". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.296726 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "2e81ff5c-f656-4f24-bf49-33fbec1f7052" (UID: "2e81ff5c-f656-4f24-bf49-33fbec1f7052"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.374376 4949 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-bound-sa-token\") on node \"crc\" DevicePath \"\"" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.374418 4949 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-registry-tls\") on node \"crc\" DevicePath \"\"" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.374431 4949 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/2e81ff5c-f656-4f24-bf49-33fbec1f7052-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.374444 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bpcjq\" (UniqueName: \"kubernetes.io/projected/2e81ff5c-f656-4f24-bf49-33fbec1f7052-kube-api-access-bpcjq\") on node \"crc\" DevicePath \"\"" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.374460 4949 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/2e81ff5c-f656-4f24-bf49-33fbec1f7052-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.881261 4949 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" event={"ID":"2e81ff5c-f656-4f24-bf49-33fbec1f7052","Type":"ContainerDied","Data":"75c4790252abdaafdd08e664512529488e29683c3a8549c2002436046be9ad70"} Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.881317 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-47lbj" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.881330 4949 scope.go:117] "RemoveContainer" containerID="7b8a6c2394399b015d49a35cff7d8692b28191ee5c40aa0a9bc1f42f61d3e94b" Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.910334 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47lbj"] Oct 01 15:49:48 crc kubenswrapper[4949]: I1001 15:49:48.914025 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-47lbj"] Oct 01 15:49:49 crc kubenswrapper[4949]: I1001 15:49:49.623584 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e81ff5c-f656-4f24-bf49-33fbec1f7052" path="/var/lib/kubelet/pods/2e81ff5c-f656-4f24-bf49-33fbec1f7052/volumes" Oct 01 15:50:18 crc kubenswrapper[4949]: I1001 15:50:18.039026 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:50:18 crc kubenswrapper[4949]: I1001 15:50:18.039628 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 
15:50:48 crc kubenswrapper[4949]: I1001 15:50:48.038710 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:50:48 crc kubenswrapper[4949]: I1001 15:50:48.039294 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:51:18 crc kubenswrapper[4949]: I1001 15:51:18.038356 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:51:18 crc kubenswrapper[4949]: I1001 15:51:18.038824 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:51:18 crc kubenswrapper[4949]: I1001 15:51:18.038886 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:51:18 crc kubenswrapper[4949]: I1001 15:51:18.039509 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"d65dc5a00a68ef848a51564a98d129e0f9a7891d85cd971faa8d65554abab984"} 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 15:51:18 crc kubenswrapper[4949]: I1001 15:51:18.039564 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://d65dc5a00a68ef848a51564a98d129e0f9a7891d85cd971faa8d65554abab984" gracePeriod=600 Oct 01 15:51:18 crc kubenswrapper[4949]: I1001 15:51:18.347434 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="d65dc5a00a68ef848a51564a98d129e0f9a7891d85cd971faa8d65554abab984" exitCode=0 Oct 01 15:51:18 crc kubenswrapper[4949]: I1001 15:51:18.347505 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"d65dc5a00a68ef848a51564a98d129e0f9a7891d85cd971faa8d65554abab984"} Oct 01 15:51:18 crc kubenswrapper[4949]: I1001 15:51:18.348043 4949 scope.go:117] "RemoveContainer" containerID="66ef7ab6a5e7fcf3b79f687490573d82ab228e320149f8042b009a1c999806ee" Oct 01 15:51:19 crc kubenswrapper[4949]: I1001 15:51:19.354588 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"8b48ddbdf5b95765cf3f08bfbf80fa29211dfe735cc809fa7f3ec31b955af407"} Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.462531 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-lqrkp"] Oct 01 15:51:52 crc kubenswrapper[4949]: E1001 15:51:52.463148 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="2e81ff5c-f656-4f24-bf49-33fbec1f7052" containerName="registry" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.463159 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e81ff5c-f656-4f24-bf49-33fbec1f7052" containerName="registry" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.463256 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e81ff5c-f656-4f24-bf49-33fbec1f7052" containerName="registry" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.463695 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-lqrkp" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.465949 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.466300 4949 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-mpwfn" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.466623 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.473521 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-lqrkp"] Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.496453 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7vbbz"] Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.497275 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7vbbz" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.498845 4949 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-fkx4k" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.507348 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7vbbz"] Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.512803 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-4cjht"] Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.513617 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.517385 4949 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-5q6zn" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.525991 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-4cjht"] Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.570700 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-th7wv\" (UniqueName: \"kubernetes.io/projected/ea0056e5-5d6c-4039-9891-175a1352ab9d-kube-api-access-th7wv\") pod \"cert-manager-cainjector-7f985d654d-lqrkp\" (UID: \"ea0056e5-5d6c-4039-9891-175a1352ab9d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-lqrkp" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.570758 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwzrk\" (UniqueName: \"kubernetes.io/projected/79dab76c-1e2a-4bd3-9a7d-8efe7bfd104b-kube-api-access-mwzrk\") pod \"cert-manager-5b446d88c5-7vbbz\" (UID: \"79dab76c-1e2a-4bd3-9a7d-8efe7bfd104b\") " 
pod="cert-manager/cert-manager-5b446d88c5-7vbbz" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.671524 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9h8l\" (UniqueName: \"kubernetes.io/projected/0bc43872-8657-4c8b-be00-679944969a4d-kube-api-access-m9h8l\") pod \"cert-manager-webhook-5655c58dd6-4cjht\" (UID: \"0bc43872-8657-4c8b-be00-679944969a4d\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.671818 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-th7wv\" (UniqueName: \"kubernetes.io/projected/ea0056e5-5d6c-4039-9891-175a1352ab9d-kube-api-access-th7wv\") pod \"cert-manager-cainjector-7f985d654d-lqrkp\" (UID: \"ea0056e5-5d6c-4039-9891-175a1352ab9d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-lqrkp" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.671995 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mwzrk\" (UniqueName: \"kubernetes.io/projected/79dab76c-1e2a-4bd3-9a7d-8efe7bfd104b-kube-api-access-mwzrk\") pod \"cert-manager-5b446d88c5-7vbbz\" (UID: \"79dab76c-1e2a-4bd3-9a7d-8efe7bfd104b\") " pod="cert-manager/cert-manager-5b446d88c5-7vbbz" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.690953 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-th7wv\" (UniqueName: \"kubernetes.io/projected/ea0056e5-5d6c-4039-9891-175a1352ab9d-kube-api-access-th7wv\") pod \"cert-manager-cainjector-7f985d654d-lqrkp\" (UID: \"ea0056e5-5d6c-4039-9891-175a1352ab9d\") " pod="cert-manager/cert-manager-cainjector-7f985d654d-lqrkp" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.693811 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mwzrk\" (UniqueName: 
\"kubernetes.io/projected/79dab76c-1e2a-4bd3-9a7d-8efe7bfd104b-kube-api-access-mwzrk\") pod \"cert-manager-5b446d88c5-7vbbz\" (UID: \"79dab76c-1e2a-4bd3-9a7d-8efe7bfd104b\") " pod="cert-manager/cert-manager-5b446d88c5-7vbbz" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.773395 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9h8l\" (UniqueName: \"kubernetes.io/projected/0bc43872-8657-4c8b-be00-679944969a4d-kube-api-access-m9h8l\") pod \"cert-manager-webhook-5655c58dd6-4cjht\" (UID: \"0bc43872-8657-4c8b-be00-679944969a4d\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.781352 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-7f985d654d-lqrkp" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.793116 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9h8l\" (UniqueName: \"kubernetes.io/projected/0bc43872-8657-4c8b-be00-679944969a4d-kube-api-access-m9h8l\") pod \"cert-manager-webhook-5655c58dd6-4cjht\" (UID: \"0bc43872-8657-4c8b-be00-679944969a4d\") " pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.814094 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-5b446d88c5-7vbbz" Oct 01 15:51:52 crc kubenswrapper[4949]: I1001 15:51:52.827570 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" Oct 01 15:51:53 crc kubenswrapper[4949]: I1001 15:51:53.051781 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-5655c58dd6-4cjht"] Oct 01 15:51:53 crc kubenswrapper[4949]: W1001 15:51:53.071229 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0bc43872_8657_4c8b_be00_679944969a4d.slice/crio-ac10f8a984c294495eeea90aa599e853b75037cb1132c75bd3d3de1a3ec5b147 WatchSource:0}: Error finding container ac10f8a984c294495eeea90aa599e853b75037cb1132c75bd3d3de1a3ec5b147: Status 404 returned error can't find the container with id ac10f8a984c294495eeea90aa599e853b75037cb1132c75bd3d3de1a3ec5b147 Oct 01 15:51:53 crc kubenswrapper[4949]: I1001 15:51:53.075544 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 15:51:53 crc kubenswrapper[4949]: I1001 15:51:53.207049 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-5b446d88c5-7vbbz"] Oct 01 15:51:53 crc kubenswrapper[4949]: I1001 15:51:53.210047 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7f985d654d-lqrkp"] Oct 01 15:51:53 crc kubenswrapper[4949]: W1001 15:51:53.211732 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod79dab76c_1e2a_4bd3_9a7d_8efe7bfd104b.slice/crio-211d9ac95ee8d2730ba6922f2c92cd105ee250b67480ac326cef58e66950bd9b WatchSource:0}: Error finding container 211d9ac95ee8d2730ba6922f2c92cd105ee250b67480ac326cef58e66950bd9b: Status 404 returned error can't find the container with id 211d9ac95ee8d2730ba6922f2c92cd105ee250b67480ac326cef58e66950bd9b Oct 01 15:51:53 crc kubenswrapper[4949]: W1001 15:51:53.213789 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podea0056e5_5d6c_4039_9891_175a1352ab9d.slice/crio-95a615f7a50bf7d6c9e73aa159042bc163ab486cf76b285b5659ffe43f0b33cb WatchSource:0}: Error finding container 95a615f7a50bf7d6c9e73aa159042bc163ab486cf76b285b5659ffe43f0b33cb: Status 404 returned error can't find the container with id 95a615f7a50bf7d6c9e73aa159042bc163ab486cf76b285b5659ffe43f0b33cb Oct 01 15:51:53 crc kubenswrapper[4949]: I1001 15:51:53.536674 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7vbbz" event={"ID":"79dab76c-1e2a-4bd3-9a7d-8efe7bfd104b","Type":"ContainerStarted","Data":"211d9ac95ee8d2730ba6922f2c92cd105ee250b67480ac326cef58e66950bd9b"} Oct 01 15:51:53 crc kubenswrapper[4949]: I1001 15:51:53.537894 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" event={"ID":"0bc43872-8657-4c8b-be00-679944969a4d","Type":"ContainerStarted","Data":"ac10f8a984c294495eeea90aa599e853b75037cb1132c75bd3d3de1a3ec5b147"} Oct 01 15:51:53 crc kubenswrapper[4949]: I1001 15:51:53.538899 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-lqrkp" event={"ID":"ea0056e5-5d6c-4039-9891-175a1352ab9d","Type":"ContainerStarted","Data":"95a615f7a50bf7d6c9e73aa159042bc163ab486cf76b285b5659ffe43f0b33cb"} Oct 01 15:51:56 crc kubenswrapper[4949]: I1001 15:51:56.556728 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" event={"ID":"0bc43872-8657-4c8b-be00-679944969a4d","Type":"ContainerStarted","Data":"838954baff5b44e9b5cbe84d14a870cef9320d0a6fe75035efe50b0a8969b7c2"} Oct 01 15:51:56 crc kubenswrapper[4949]: I1001 15:51:56.557456 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" Oct 01 15:51:56 crc kubenswrapper[4949]: I1001 15:51:56.575724 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" podStartSLOduration=1.908227009 podStartE2EDuration="4.575704028s" podCreationTimestamp="2025-10-01 15:51:52 +0000 UTC" firstStartedPulling="2025-10-01 15:51:53.075286845 +0000 UTC m=+612.380893056" lastFinishedPulling="2025-10-01 15:51:55.742763884 +0000 UTC m=+615.048370075" observedRunningTime="2025-10-01 15:51:56.574910156 +0000 UTC m=+615.880516337" watchObservedRunningTime="2025-10-01 15:51:56.575704028 +0000 UTC m=+615.881310229" Oct 01 15:51:57 crc kubenswrapper[4949]: I1001 15:51:57.565056 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-5b446d88c5-7vbbz" event={"ID":"79dab76c-1e2a-4bd3-9a7d-8efe7bfd104b","Type":"ContainerStarted","Data":"dc7bdb86e1cbb8b82be65b987b739ba1394bbc48290da158f4fd07deeac00abf"} Oct 01 15:51:57 crc kubenswrapper[4949]: I1001 15:51:57.567074 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7f985d654d-lqrkp" event={"ID":"ea0056e5-5d6c-4039-9891-175a1352ab9d","Type":"ContainerStarted","Data":"3f1632f97a453e204102edadb9b0fb1860ad786abb09a8993de542fdfc0042fd"} Oct 01 15:51:57 crc kubenswrapper[4949]: I1001 15:51:57.585652 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-5b446d88c5-7vbbz" podStartSLOduration=2.291601456 podStartE2EDuration="5.585630972s" podCreationTimestamp="2025-10-01 15:51:52 +0000 UTC" firstStartedPulling="2025-10-01 15:51:53.222262244 +0000 UTC m=+612.527868435" lastFinishedPulling="2025-10-01 15:51:56.51629177 +0000 UTC m=+615.821897951" observedRunningTime="2025-10-01 15:51:57.583012702 +0000 UTC m=+616.888618903" watchObservedRunningTime="2025-10-01 15:51:57.585630972 +0000 UTC m=+616.891237183" Oct 01 15:51:57 crc kubenswrapper[4949]: I1001 15:51:57.599210 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-cainjector-7f985d654d-lqrkp" podStartSLOduration=2.354844307 podStartE2EDuration="5.599184965s" podCreationTimestamp="2025-10-01 15:51:52 +0000 UTC" firstStartedPulling="2025-10-01 15:51:53.216549431 +0000 UTC m=+612.522155622" lastFinishedPulling="2025-10-01 15:51:56.460890049 +0000 UTC m=+615.766496280" observedRunningTime="2025-10-01 15:51:57.594973533 +0000 UTC m=+616.900579734" watchObservedRunningTime="2025-10-01 15:51:57.599184965 +0000 UTC m=+616.904791156" Oct 01 15:52:02 crc kubenswrapper[4949]: I1001 15:52:02.832278 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-5655c58dd6-4cjht" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.221150 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pppfm"] Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.221550 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovn-controller" containerID="cri-o://2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc" gracePeriod=30 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.221927 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="sbdb" containerID="cri-o://d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e" gracePeriod=30 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.221979 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="nbdb" containerID="cri-o://728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4" gracePeriod=30 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 
15:52:03.222023 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="northd" containerID="cri-o://35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0" gracePeriod=30 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.222059 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e" gracePeriod=30 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.222090 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kube-rbac-proxy-node" containerID="cri-o://2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26" gracePeriod=30 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.222140 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovn-acl-logging" containerID="cri-o://fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba" gracePeriod=30 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.264029 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" containerID="cri-o://38cbc33011ec57f66d2abf2dbe2f8c91a9857563b440dc7517bacd0666294750" gracePeriod=30 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.608280 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovnkube-controller/3.log" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.610305 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovn-acl-logging/0.log" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.610730 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovn-controller/0.log" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611064 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="38cbc33011ec57f66d2abf2dbe2f8c91a9857563b440dc7517bacd0666294750" exitCode=0 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611100 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e" exitCode=0 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611111 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4" exitCode=0 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611145 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0" exitCode=0 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611155 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e" exitCode=0 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611151 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"38cbc33011ec57f66d2abf2dbe2f8c91a9857563b440dc7517bacd0666294750"} Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611163 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26" exitCode=0 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611203 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba" exitCode=143 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611193 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e"} Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611256 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4"} Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611283 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0"} Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611299 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e"} Oct 01 15:52:03 
crc kubenswrapper[4949]: I1001 15:52:03.611308 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26"} Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611318 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba"} Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611328 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc"} Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611206 4949 scope.go:117] "RemoveContainer" containerID="1e7482d935a30cc08710ffb6a3f58cda0f46736eedf37f8852d2d027ef121e6a" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.611214 4949 generic.go:334] "Generic (PLEG): container finished" podID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerID="2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc" exitCode=143 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.613177 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/2.log" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.613642 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/1.log" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.613732 4949 generic.go:334] "Generic (PLEG): container finished" podID="ffe32683-6bbe-472a-811e-8fe0fd1d1bb6" 
containerID="674ea8da82695405b8163ca176857f496a30efb1ab1f6e9e6485ce661af8216d" exitCode=2 Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.613764 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s5r4m" event={"ID":"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6","Type":"ContainerDied","Data":"674ea8da82695405b8163ca176857f496a30efb1ab1f6e9e6485ce661af8216d"} Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.614237 4949 scope.go:117] "RemoveContainer" containerID="674ea8da82695405b8163ca176857f496a30efb1ab1f6e9e6485ce661af8216d" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.614465 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-s5r4m_openshift-multus(ffe32683-6bbe-472a-811e-8fe0fd1d1bb6)\"" pod="openshift-multus/multus-s5r4m" podUID="ffe32683-6bbe-472a-811e-8fe0fd1d1bb6" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.643329 4949 scope.go:117] "RemoveContainer" containerID="b3d51987d4ca121795b1960ac1901175eb67d69f9380770888791f085e0f3bcc" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.904078 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovn-acl-logging/0.log" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.904571 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovn-controller/0.log" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.904968 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953219 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-4htfq"] Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953459 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="sbdb" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953476 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="sbdb" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953488 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="nbdb" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953497 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="nbdb" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953513 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953521 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953537 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kube-rbac-proxy-node" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953544 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kube-rbac-proxy-node" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953555 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="northd" 
Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953562 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="northd" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953573 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kubecfg-setup" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953581 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kubecfg-setup" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953589 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovn-acl-logging" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953596 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovn-acl-logging" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953605 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953612 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953621 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovn-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953628 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovn-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953638 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc 
kubenswrapper[4949]: I1001 15:52:03.953645 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953654 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kube-rbac-proxy-ovn-metrics" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953662 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kube-rbac-proxy-ovn-metrics" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.953702 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953710 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953844 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="nbdb" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953856 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kube-rbac-proxy-ovn-metrics" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953866 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="sbdb" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953876 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953884 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" 
containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953893 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="kube-rbac-proxy-node" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953906 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953913 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="northd" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953920 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovn-acl-logging" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.953930 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovn-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: E1001 15:52:03.954042 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.954053 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.954249 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.954498 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" containerName="ovnkube-controller" Oct 01 15:52:03 crc kubenswrapper[4949]: I1001 15:52:03.956035 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028636 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-var-lib-openvswitch\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028683 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-netd\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028723 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-log-socket\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028742 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "var-lib-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028766 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-node-log\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028783 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-openvswitch\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028788 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028805 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-script-lib\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-node-log" (OuterVolumeSpecName: "node-log") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "node-log". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028822 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f7t8d\" (UniqueName: \"kubernetes.io/projected/6b30af5f-469f-4bee-b77f-4b58edba325b-kube-api-access-f7t8d\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028838 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-netns\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028855 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-systemd-units\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028877 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-systemd\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028892 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-env-overrides\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028911 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b30af5f-469f-4bee-b77f-4b58edba325b-ovn-node-metrics-cert\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028927 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-ovn\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028946 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-ovn-kubernetes\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028965 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-config\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028838 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-log-socket" (OuterVolumeSpecName: "log-socket") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "log-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028997 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029029 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-kubelet\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029047 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-bin\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029060 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-etc-openvswitch\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029081 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-slash\") pod \"6b30af5f-469f-4bee-b77f-4b58edba325b\" (UID: \"6b30af5f-469f-4bee-b77f-4b58edba325b\") " Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029212 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-env-overrides\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029233 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029251 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5rrq\" (UniqueName: \"kubernetes.io/projected/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-kube-api-access-j5rrq\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029271 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-run-ovn\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029287 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-var-lib-openvswitch\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 
crc kubenswrapper[4949]: I1001 15:52:04.029303 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-run-systemd\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029316 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-ovn-node-metrics-cert\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029331 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-slash\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029364 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-cni-netd\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029381 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-ovnkube-config\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 
01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029396 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-kubelet\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029420 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-etc-openvswitch\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029444 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-run-openvswitch\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-ovnkube-script-lib\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028914 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). 
InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.028944 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029087 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029162 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029187 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029520 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029480 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-node-log\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029543 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029329 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029454 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029468 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029502 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-slash" (OuterVolumeSpecName: "host-slash") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029557 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029611 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-systemd-units\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029738 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029775 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-cni-bin\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029834 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-run-ovn-kubernetes\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029905 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-log-socket\") pod 
\"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.029974 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-run-netns\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030093 4949 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030106 4949 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030117 4949 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-log-socket\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030143 4949 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-node-log\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030152 4949 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030160 4949 reconciler_common.go:293] "Volume detached for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030171 4949 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-netns\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030179 4949 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-systemd-units\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030188 4949 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-env-overrides\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030197 4949 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030206 4949 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030215 4949 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6b30af5f-469f-4bee-b77f-4b58edba325b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030224 4949 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: 
\"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030235 4949 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-kubelet\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030244 4949 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030251 4949 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.030259 4949 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-host-slash\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.033925 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b30af5f-469f-4bee-b77f-4b58edba325b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.034345 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b30af5f-469f-4bee-b77f-4b58edba325b-kube-api-access-f7t8d" (OuterVolumeSpecName: "kube-api-access-f7t8d") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "kube-api-access-f7t8d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.043217 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "6b30af5f-469f-4bee-b77f-4b58edba325b" (UID: "6b30af5f-469f-4bee-b77f-4b58edba325b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131508 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-run-openvswitch\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131565 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-ovnkube-script-lib\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131585 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-node-log\") pod \"ovnkube-node-4htfq\" 
(UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131618 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-systemd-units\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131639 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-cni-bin\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131655 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-run-ovn-kubernetes\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131672 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-log-socket\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131688 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-run-netns\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131705 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-env-overrides\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131723 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131741 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5rrq\" (UniqueName: \"kubernetes.io/projected/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-kube-api-access-j5rrq\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131755 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-run-ovn\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-var-lib-openvswitch\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131787 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-run-systemd\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131802 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-ovn-node-metrics-cert\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131817 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-slash\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131842 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-cni-netd\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131858 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-ovnkube-config\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc 
kubenswrapper[4949]: I1001 15:52:04.131873 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-kubelet\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131893 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-etc-openvswitch\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131926 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f7t8d\" (UniqueName: \"kubernetes.io/projected/6b30af5f-469f-4bee-b77f-4b58edba325b-kube-api-access-f7t8d\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131937 4949 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/6b30af5f-469f-4bee-b77f-4b58edba325b-run-systemd\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131946 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6b30af5f-469f-4bee-b77f-4b58edba325b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.131985 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-etc-openvswitch\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 
15:52:04.132022 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-run-openvswitch\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.132715 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-ovnkube-script-lib\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.132754 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-node-log\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.132775 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-systemd-units\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.132794 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-cni-bin\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.132815 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" 
(UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-run-ovn-kubernetes\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.132835 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-log-socket\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.132855 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-run-netns\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.133080 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-run-systemd\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.133165 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-env-overrides\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.133180 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-var-lib-openvswitch\") pod 
\"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.133209 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-kubelet\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.133234 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-slash\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.133231 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-cni-netd\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.133259 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.133231 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-run-ovn\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.133853 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-ovnkube-config\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.136886 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-ovn-node-metrics-cert\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.164776 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5rrq\" (UniqueName: \"kubernetes.io/projected/7c6a08b5-9e68-4407-b215-f02dbf0b2ac1-kube-api-access-j5rrq\") pod \"ovnkube-node-4htfq\" (UID: \"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1\") " pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.270066 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.620661 4949 generic.go:334] "Generic (PLEG): container finished" podID="7c6a08b5-9e68-4407-b215-f02dbf0b2ac1" containerID="927e468cb050f03156366efff6a8951b8cf2917e44cb008dfaab06463a05a855" exitCode=0 Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.620746 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerDied","Data":"927e468cb050f03156366efff6a8951b8cf2917e44cb008dfaab06463a05a855"} Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.621068 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerStarted","Data":"5d8b78287babff128fe679844cd3a48e9eebf8fcce7fcc9e186866e755bd1a1e"} Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.627072 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovn-acl-logging/0.log" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.627591 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-pppfm_6b30af5f-469f-4bee-b77f-4b58edba325b/ovn-controller/0.log" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.628012 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" event={"ID":"6b30af5f-469f-4bee-b77f-4b58edba325b","Type":"ContainerDied","Data":"2c8aa2de9ec6d307212b2a9111bb246385942ea204db4894c90bb0534930e300"} Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.628049 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-pppfm" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.628060 4949 scope.go:117] "RemoveContainer" containerID="38cbc33011ec57f66d2abf2dbe2f8c91a9857563b440dc7517bacd0666294750" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.630335 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/2.log" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.649610 4949 scope.go:117] "RemoveContainer" containerID="d08a6304f2c50f5e54adf26554bd2f81aba317d4ee153f55dc25123926bc380e" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.664043 4949 scope.go:117] "RemoveContainer" containerID="728c9faef5d0d63e5dd9cd98520d62db3e75646be0aeec6ff20c71598e35b5a4" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.694157 4949 scope.go:117] "RemoveContainer" containerID="35e991ed458eaa7ea90ab0ae0204b7c9b865ad5c5044709257614edda19865c0" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.704401 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pppfm"] Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.710345 4949 scope.go:117] "RemoveContainer" containerID="e0d4badfc0169a3e0cbd62d6661b7e542f43cbbe9005a56c68c7e0027235b64e" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.718790 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-pppfm"] Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.736732 4949 scope.go:117] "RemoveContainer" containerID="2df7f85dd23b39e72771fddf6c1ab34df3b9fcea664dffe05a1b8859281fff26" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.749058 4949 scope.go:117] "RemoveContainer" containerID="fde65de2ce60422cb5ddb92562725b8dcd2f8b36bee19cc6e6de8aaab216c7ba" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.760902 4949 scope.go:117] "RemoveContainer" 
containerID="2eb7bdabc02a6c9442d7f842e078acef313e10cd6dbf7153b289ee93b99c9abc" Oct 01 15:52:04 crc kubenswrapper[4949]: I1001 15:52:04.774389 4949 scope.go:117] "RemoveContainer" containerID="aa5db58ae52e340d9577e17c3da4fffb37d8b33c964bfdf02f39f28413bcc056" Oct 01 15:52:05 crc kubenswrapper[4949]: I1001 15:52:05.608377 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b30af5f-469f-4bee-b77f-4b58edba325b" path="/var/lib/kubelet/pods/6b30af5f-469f-4bee-b77f-4b58edba325b/volumes" Oct 01 15:52:05 crc kubenswrapper[4949]: I1001 15:52:05.639352 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerStarted","Data":"c688f10cc417e661531042b54b8b5a95ed9045f1bc209f24cff0716faffdf866"} Oct 01 15:52:05 crc kubenswrapper[4949]: I1001 15:52:05.639391 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerStarted","Data":"d9e25bfcc573d4bfa3a78103e88731f627c9f5c81c43b896fb838543632fdb8b"} Oct 01 15:52:05 crc kubenswrapper[4949]: I1001 15:52:05.639403 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerStarted","Data":"50f23c3fd3d3f2d6a82470191075a88dd7d825a4cd21f2875f8e55790d47d283"} Oct 01 15:52:05 crc kubenswrapper[4949]: I1001 15:52:05.639411 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerStarted","Data":"95cdabaefe69a7eaf56c033b328ded73dcc66dc47d61bdbbcaedd59f8213dc80"} Oct 01 15:52:05 crc kubenswrapper[4949]: I1001 15:52:05.639422 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" 
event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerStarted","Data":"009bd443a89d131168712c181e6f89dc8f7c70b13f67c282ac7b8d13402e8c5c"} Oct 01 15:52:05 crc kubenswrapper[4949]: I1001 15:52:05.639430 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerStarted","Data":"9387c3d5e120f107a405d30ce20ddc5824795721715f53e1ca1cd0a19af9f789"} Oct 01 15:52:07 crc kubenswrapper[4949]: I1001 15:52:07.654895 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerStarted","Data":"730dba91710999885b20fdf562c63b7d6d9a95bf36f8d6267b83b9b522a8bbe7"} Oct 01 15:52:10 crc kubenswrapper[4949]: I1001 15:52:10.674243 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" event={"ID":"7c6a08b5-9e68-4407-b215-f02dbf0b2ac1","Type":"ContainerStarted","Data":"a2202dea3a8bd93b6c9030a161a6fb1576966afd910b7557150502ae3d67b2b6"} Oct 01 15:52:10 crc kubenswrapper[4949]: I1001 15:52:10.674693 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:10 crc kubenswrapper[4949]: I1001 15:52:10.674704 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:10 crc kubenswrapper[4949]: I1001 15:52:10.710161 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" podStartSLOduration=7.710106316 podStartE2EDuration="7.710106316s" podCreationTimestamp="2025-10-01 15:52:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:52:10.706287375 +0000 UTC m=+630.011893566" 
watchObservedRunningTime="2025-10-01 15:52:10.710106316 +0000 UTC m=+630.015712507" Oct 01 15:52:10 crc kubenswrapper[4949]: I1001 15:52:10.715927 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:11 crc kubenswrapper[4949]: I1001 15:52:11.678814 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:11 crc kubenswrapper[4949]: I1001 15:52:11.712236 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:17 crc kubenswrapper[4949]: I1001 15:52:17.602286 4949 scope.go:117] "RemoveContainer" containerID="674ea8da82695405b8163ca176857f496a30efb1ab1f6e9e6485ce661af8216d" Oct 01 15:52:17 crc kubenswrapper[4949]: E1001 15:52:17.603797 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-s5r4m_openshift-multus(ffe32683-6bbe-472a-811e-8fe0fd1d1bb6)\"" pod="openshift-multus/multus-s5r4m" podUID="ffe32683-6bbe-472a-811e-8fe0fd1d1bb6" Oct 01 15:52:29 crc kubenswrapper[4949]: I1001 15:52:29.602168 4949 scope.go:117] "RemoveContainer" containerID="674ea8da82695405b8163ca176857f496a30efb1ab1f6e9e6485ce661af8216d" Oct 01 15:52:30 crc kubenswrapper[4949]: I1001 15:52:30.779291 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-s5r4m_ffe32683-6bbe-472a-811e-8fe0fd1d1bb6/kube-multus/2.log" Oct 01 15:52:30 crc kubenswrapper[4949]: I1001 15:52:30.779591 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-s5r4m" event={"ID":"ffe32683-6bbe-472a-811e-8fe0fd1d1bb6","Type":"ContainerStarted","Data":"5472f4d85c79f1f06e74d5c9d15afb581e77ee5ae44825b01a976221be0057ee"} Oct 01 15:52:34 crc kubenswrapper[4949]: I1001 15:52:34.300936 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-4htfq" Oct 01 15:52:40 crc kubenswrapper[4949]: I1001 15:52:40.905571 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q"] Oct 01 15:52:40 crc kubenswrapper[4949]: I1001 15:52:40.907009 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:40 crc kubenswrapper[4949]: I1001 15:52:40.909788 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 01 15:52:40 crc kubenswrapper[4949]: I1001 15:52:40.919692 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q"] Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.022278 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.022371 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.022427 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7jtm9\" (UniqueName: \"kubernetes.io/projected/9795b76c-919f-480c-8208-922a235c602a-kube-api-access-7jtm9\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.123398 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.123523 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7jtm9\" (UniqueName: \"kubernetes.io/projected/9795b76c-919f-480c-8208-922a235c602a-kube-api-access-7jtm9\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.123577 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.124020 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-bundle\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.124100 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-util\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.142289 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7jtm9\" (UniqueName: \"kubernetes.io/projected/9795b76c-919f-480c-8208-922a235c602a-kube-api-access-7jtm9\") pod \"9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.224209 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.417404 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q"] Oct 01 15:52:41 crc kubenswrapper[4949]: W1001 15:52:41.424019 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9795b76c_919f_480c_8208_922a235c602a.slice/crio-87e4d9011d911528f865163e5681431c83e9ad10286ff1c64766f7969299f3bb WatchSource:0}: Error finding container 87e4d9011d911528f865163e5681431c83e9ad10286ff1c64766f7969299f3bb: Status 404 returned error can't find the container with id 87e4d9011d911528f865163e5681431c83e9ad10286ff1c64766f7969299f3bb Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.835444 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" event={"ID":"9795b76c-919f-480c-8208-922a235c602a","Type":"ContainerStarted","Data":"f467dc5b54a5e745c08d0d94edc8014fd003401b1a7b7aaa563139d9e63e676b"} Oct 01 15:52:41 crc kubenswrapper[4949]: I1001 15:52:41.835984 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" event={"ID":"9795b76c-919f-480c-8208-922a235c602a","Type":"ContainerStarted","Data":"87e4d9011d911528f865163e5681431c83e9ad10286ff1c64766f7969299f3bb"} Oct 01 15:52:42 crc kubenswrapper[4949]: I1001 15:52:42.841344 4949 generic.go:334] "Generic (PLEG): container finished" podID="9795b76c-919f-480c-8208-922a235c602a" containerID="f467dc5b54a5e745c08d0d94edc8014fd003401b1a7b7aaa563139d9e63e676b" exitCode=0 Oct 01 15:52:42 crc kubenswrapper[4949]: I1001 15:52:42.841399 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" event={"ID":"9795b76c-919f-480c-8208-922a235c602a","Type":"ContainerDied","Data":"f467dc5b54a5e745c08d0d94edc8014fd003401b1a7b7aaa563139d9e63e676b"} Oct 01 15:52:44 crc kubenswrapper[4949]: I1001 15:52:44.853770 4949 generic.go:334] "Generic (PLEG): container finished" podID="9795b76c-919f-480c-8208-922a235c602a" containerID="b229014f54903ac9e0aeef73edb9bee9c6d5bb5d656001a034d05be13af4e2bd" exitCode=0 Oct 01 15:52:44 crc kubenswrapper[4949]: I1001 15:52:44.853863 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" event={"ID":"9795b76c-919f-480c-8208-922a235c602a","Type":"ContainerDied","Data":"b229014f54903ac9e0aeef73edb9bee9c6d5bb5d656001a034d05be13af4e2bd"} Oct 01 15:52:45 crc kubenswrapper[4949]: I1001 15:52:45.861473 4949 generic.go:334] "Generic (PLEG): container finished" podID="9795b76c-919f-480c-8208-922a235c602a" containerID="40d8977701daa63a9b7f326431baa87344e4d019c8e165428792feeb66a474f5" exitCode=0 Oct 01 15:52:45 crc kubenswrapper[4949]: I1001 15:52:45.861523 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" event={"ID":"9795b76c-919f-480c-8208-922a235c602a","Type":"ContainerDied","Data":"40d8977701daa63a9b7f326431baa87344e4d019c8e165428792feeb66a474f5"} Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.084836 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.204038 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-bundle\") pod \"9795b76c-919f-480c-8208-922a235c602a\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.204235 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-util\") pod \"9795b76c-919f-480c-8208-922a235c602a\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.204313 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jtm9\" (UniqueName: \"kubernetes.io/projected/9795b76c-919f-480c-8208-922a235c602a-kube-api-access-7jtm9\") pod \"9795b76c-919f-480c-8208-922a235c602a\" (UID: \"9795b76c-919f-480c-8208-922a235c602a\") " Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.204817 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-bundle" (OuterVolumeSpecName: "bundle") pod "9795b76c-919f-480c-8208-922a235c602a" (UID: "9795b76c-919f-480c-8208-922a235c602a"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.211789 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9795b76c-919f-480c-8208-922a235c602a-kube-api-access-7jtm9" (OuterVolumeSpecName: "kube-api-access-7jtm9") pod "9795b76c-919f-480c-8208-922a235c602a" (UID: "9795b76c-919f-480c-8208-922a235c602a"). InnerVolumeSpecName "kube-api-access-7jtm9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.215962 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-util" (OuterVolumeSpecName: "util") pod "9795b76c-919f-480c-8208-922a235c602a" (UID: "9795b76c-919f-480c-8208-922a235c602a"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.306221 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.306269 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9795b76c-919f-480c-8208-922a235c602a-util\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.306283 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7jtm9\" (UniqueName: \"kubernetes.io/projected/9795b76c-919f-480c-8208-922a235c602a-kube-api-access-7jtm9\") on node \"crc\" DevicePath \"\"" Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.874617 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" event={"ID":"9795b76c-919f-480c-8208-922a235c602a","Type":"ContainerDied","Data":"87e4d9011d911528f865163e5681431c83e9ad10286ff1c64766f7969299f3bb"} Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.874664 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q" Oct 01 15:52:47 crc kubenswrapper[4949]: I1001 15:52:47.874663 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="87e4d9011d911528f865163e5681431c83e9ad10286ff1c64766f7969299f3bb" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.447654 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg"] Oct 01 15:52:52 crc kubenswrapper[4949]: E1001 15:52:52.448139 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9795b76c-919f-480c-8208-922a235c602a" containerName="extract" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.448154 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9795b76c-919f-480c-8208-922a235c602a" containerName="extract" Oct 01 15:52:52 crc kubenswrapper[4949]: E1001 15:52:52.448164 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9795b76c-919f-480c-8208-922a235c602a" containerName="util" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.448170 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9795b76c-919f-480c-8208-922a235c602a" containerName="util" Oct 01 15:52:52 crc kubenswrapper[4949]: E1001 15:52:52.448194 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9795b76c-919f-480c-8208-922a235c602a" containerName="pull" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.448205 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9795b76c-919f-480c-8208-922a235c602a" containerName="pull" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.448334 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9795b76c-919f-480c-8208-922a235c602a" containerName="extract" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.448722 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.450733 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-operator-dockercfg-lx7n5" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.451788 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"openshift-service-ca.crt" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.452311 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"kube-root-ca.crt" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.459019 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg"] Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.572476 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-64mqp\" (UniqueName: \"kubernetes.io/projected/7045f77f-0c3a-4e54-8378-2fcda1244f0c-kube-api-access-64mqp\") pod \"nmstate-operator-5d6f6cfd66-bpnxg\" (UID: \"7045f77f-0c3a-4e54-8378-2fcda1244f0c\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.673727 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-64mqp\" (UniqueName: \"kubernetes.io/projected/7045f77f-0c3a-4e54-8378-2fcda1244f0c-kube-api-access-64mqp\") pod \"nmstate-operator-5d6f6cfd66-bpnxg\" (UID: \"7045f77f-0c3a-4e54-8378-2fcda1244f0c\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.697468 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-64mqp\" (UniqueName: \"kubernetes.io/projected/7045f77f-0c3a-4e54-8378-2fcda1244f0c-kube-api-access-64mqp\") pod \"nmstate-operator-5d6f6cfd66-bpnxg\" (UID: 
\"7045f77f-0c3a-4e54-8378-2fcda1244f0c\") " pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.764216 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" Oct 01 15:52:52 crc kubenswrapper[4949]: I1001 15:52:52.990285 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg"] Oct 01 15:52:53 crc kubenswrapper[4949]: I1001 15:52:53.905474 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" event={"ID":"7045f77f-0c3a-4e54-8378-2fcda1244f0c","Type":"ContainerStarted","Data":"cb619d1c26605c6f9635e8e33d0f4593bee8596df894b7ee264cb6512d30e9ad"} Oct 01 15:52:55 crc kubenswrapper[4949]: I1001 15:52:55.919863 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" event={"ID":"7045f77f-0c3a-4e54-8378-2fcda1244f0c","Type":"ContainerStarted","Data":"1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a"} Oct 01 15:52:55 crc kubenswrapper[4949]: I1001 15:52:55.942853 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" podStartSLOduration=1.5416686529999999 podStartE2EDuration="3.942832473s" podCreationTimestamp="2025-10-01 15:52:52 +0000 UTC" firstStartedPulling="2025-10-01 15:52:53.000827834 +0000 UTC m=+672.306434045" lastFinishedPulling="2025-10-01 15:52:55.401991674 +0000 UTC m=+674.707597865" observedRunningTime="2025-10-01 15:52:55.938881257 +0000 UTC m=+675.244487458" watchObservedRunningTime="2025-10-01 15:52:55.942832473 +0000 UTC m=+675.248438664" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.458516 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5"] Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 
15:53:01.459706 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.461875 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"openshift-nmstate-webhook" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.462171 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"nmstate-handler-dockercfg-bdj7h" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.463221 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv"] Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.464053 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.479792 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5"] Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.487860 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-g2cvf"] Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.489145 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d8e345e7-61b3-4723-8332-bb171b328a6a-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-gzrq5\" (UID: \"d8e345e7-61b3-4723-8332-bb171b328a6a\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.489236 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdprg\" (UniqueName: \"kubernetes.io/projected/d8e345e7-61b3-4723-8332-bb171b328a6a-kube-api-access-fdprg\") pod \"nmstate-webhook-6d689559c5-gzrq5\" (UID: 
\"d8e345e7-61b3-4723-8332-bb171b328a6a\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.489286 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hjqwk\" (UniqueName: \"kubernetes.io/projected/42191e90-2de0-4988-860e-61d057d81232-kube-api-access-hjqwk\") pod \"nmstate-metrics-58fcddf996-tgtpv\" (UID: \"42191e90-2de0-4988-860e-61d057d81232\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.489460 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.495064 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv"] Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.590471 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d8e345e7-61b3-4723-8332-bb171b328a6a-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-gzrq5\" (UID: \"d8e345e7-61b3-4723-8332-bb171b328a6a\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.590529 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-dbus-socket\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.590562 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-nmstate-lock\") pod \"nmstate-handler-g2cvf\" 
(UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: E1001 15:53:01.590677 4949 secret.go:188] Couldn't get secret openshift-nmstate/openshift-nmstate-webhook: secret "openshift-nmstate-webhook" not found Oct 01 15:53:01 crc kubenswrapper[4949]: E1001 15:53:01.590737 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d8e345e7-61b3-4723-8332-bb171b328a6a-tls-key-pair podName:d8e345e7-61b3-4723-8332-bb171b328a6a nodeName:}" failed. No retries permitted until 2025-10-01 15:53:02.090714763 +0000 UTC m=+681.396320954 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "tls-key-pair" (UniqueName: "kubernetes.io/secret/d8e345e7-61b3-4723-8332-bb171b328a6a-tls-key-pair") pod "nmstate-webhook-6d689559c5-gzrq5" (UID: "d8e345e7-61b3-4723-8332-bb171b328a6a") : secret "openshift-nmstate-webhook" not found Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.590771 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdprg\" (UniqueName: \"kubernetes.io/projected/d8e345e7-61b3-4723-8332-bb171b328a6a-kube-api-access-fdprg\") pod \"nmstate-webhook-6d689559c5-gzrq5\" (UID: \"d8e345e7-61b3-4723-8332-bb171b328a6a\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.590802 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tj5qn\" (UniqueName: \"kubernetes.io/projected/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-kube-api-access-tj5qn\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.590820 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hjqwk\" (UniqueName: 
\"kubernetes.io/projected/42191e90-2de0-4988-860e-61d057d81232-kube-api-access-hjqwk\") pod \"nmstate-metrics-58fcddf996-tgtpv\" (UID: \"42191e90-2de0-4988-860e-61d057d81232\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.590842 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-ovs-socket\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.612545 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp"] Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.613466 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.619796 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"default-dockercfg-zkkgg" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.620067 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-nmstate"/"nginx-conf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.620226 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-nmstate"/"plugin-serving-cert" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.625791 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hjqwk\" (UniqueName: \"kubernetes.io/projected/42191e90-2de0-4988-860e-61d057d81232-kube-api-access-hjqwk\") pod \"nmstate-metrics-58fcddf996-tgtpv\" (UID: \"42191e90-2de0-4988-860e-61d057d81232\") " pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 
15:53:01.627648 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp"] Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.627763 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdprg\" (UniqueName: \"kubernetes.io/projected/d8e345e7-61b3-4723-8332-bb171b328a6a-kube-api-access-fdprg\") pod \"nmstate-webhook-6d689559c5-gzrq5\" (UID: \"d8e345e7-61b3-4723-8332-bb171b328a6a\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.691591 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-nmstate-lock\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.692218 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwfk9\" (UniqueName: \"kubernetes.io/projected/a1e85787-64f5-453e-805e-59446da74677-kube-api-access-hwfk9\") pod \"nmstate-console-plugin-864bb6dfb5-lf4tp\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.692362 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tj5qn\" (UniqueName: \"kubernetes.io/projected/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-kube-api-access-tj5qn\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.691718 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: 
\"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-nmstate-lock\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.692484 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-ovs-socket\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.692570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a1e85787-64f5-453e-805e-59446da74677-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-lf4tp\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.692603 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e85787-64f5-453e-805e-59446da74677-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-lf4tp\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.692667 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-dbus-socket\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.692902 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: 
\"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-ovs-socket\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.692936 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-dbus-socket\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.711941 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tj5qn\" (UniqueName: \"kubernetes.io/projected/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-kube-api-access-tj5qn\") pod \"nmstate-handler-g2cvf\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.785365 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-544b6b9d57-tr59c"] Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.786217 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.793246 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwfk9\" (UniqueName: \"kubernetes.io/projected/a1e85787-64f5-453e-805e-59446da74677-kube-api-access-hwfk9\") pod \"nmstate-console-plugin-864bb6dfb5-lf4tp\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.793327 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a1e85787-64f5-453e-805e-59446da74677-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-lf4tp\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.793353 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e85787-64f5-453e-805e-59446da74677-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-lf4tp\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.794603 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a1e85787-64f5-453e-805e-59446da74677-nginx-conf\") pod \"nmstate-console-plugin-864bb6dfb5-lf4tp\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.803254 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-544b6b9d57-tr59c"] Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.807258 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e85787-64f5-453e-805e-59446da74677-plugin-serving-cert\") pod \"nmstate-console-plugin-864bb6dfb5-lf4tp\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.809487 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwfk9\" (UniqueName: \"kubernetes.io/projected/a1e85787-64f5-453e-805e-59446da74677-kube-api-access-hwfk9\") pod \"nmstate-console-plugin-864bb6dfb5-lf4tp\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.841358 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.845810 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.894372 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-console-serving-cert\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.894411 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-console-oauth-config\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.894431 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-service-ca\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.894460 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-trusted-ca-bundle\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.894477 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-console-config\") pod 
\"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.894497 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25dlg\" (UniqueName: \"kubernetes.io/projected/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-kube-api-access-25dlg\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.894517 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-oauth-serving-cert\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.962735 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.984101 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-g2cvf" event={"ID":"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9","Type":"ContainerStarted","Data":"41afdae1f10045eb6276c010cc8a7af333fededfabbb81660eb0dc03911919a2"} Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.996002 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-oauth-serving-cert\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.996192 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-console-serving-cert\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.996227 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-console-oauth-config\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.996256 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-service-ca\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc 
kubenswrapper[4949]: I1001 15:53:01.996297 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-trusted-ca-bundle\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.996319 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-console-config\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.996346 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-25dlg\" (UniqueName: \"kubernetes.io/projected/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-kube-api-access-25dlg\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.997578 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-trusted-ca-bundle\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.997663 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-oauth-serving-cert\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 
15:53:01.997761 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-service-ca\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:01 crc kubenswrapper[4949]: I1001 15:53:01.997810 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-console-config\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.001319 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-console-oauth-config\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.001618 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-console-serving-cert\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.017683 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-25dlg\" (UniqueName: \"kubernetes.io/projected/4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0-kube-api-access-25dlg\") pod \"console-544b6b9d57-tr59c\" (UID: \"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0\") " pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.056617 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv"] Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.097465 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d8e345e7-61b3-4723-8332-bb171b328a6a-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-gzrq5\" (UID: \"d8e345e7-61b3-4723-8332-bb171b328a6a\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.101455 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d8e345e7-61b3-4723-8332-bb171b328a6a-tls-key-pair\") pod \"nmstate-webhook-6d689559c5-gzrq5\" (UID: \"d8e345e7-61b3-4723-8332-bb171b328a6a\") " pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.124163 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.146513 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.150848 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp"] Oct 01 15:53:02 crc kubenswrapper[4949]: W1001 15:53:02.158189 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda1e85787_64f5_453e_805e_59446da74677.slice/crio-f990ba22142551f435af3d8516a57bc696074a2f9001558209f09797500a3ec3 WatchSource:0}: Error finding container f990ba22142551f435af3d8516a57bc696074a2f9001558209f09797500a3ec3: Status 404 returned error can't find the container with id f990ba22142551f435af3d8516a57bc696074a2f9001558209f09797500a3ec3 Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.305956 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5"] Oct 01 15:53:02 crc kubenswrapper[4949]: W1001 15:53:02.319346 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8e345e7_61b3_4723_8332_bb171b328a6a.slice/crio-cba03a42a5fcfb234361bdfe93c70657b64117ecc54c43c42bd33e5bcec3c15f WatchSource:0}: Error finding container cba03a42a5fcfb234361bdfe93c70657b64117ecc54c43c42bd33e5bcec3c15f: Status 404 returned error can't find the container with id cba03a42a5fcfb234361bdfe93c70657b64117ecc54c43c42bd33e5bcec3c15f Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.351410 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-544b6b9d57-tr59c"] Oct 01 15:53:02 crc kubenswrapper[4949]: W1001 15:53:02.356293 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4bbc6807_4a1b_42f9_88d7_1cf1cc8968a0.slice/crio-d2437132f0a862cb462403ef8c91fbf3fb255da76c001c1c415d7dada22c4758 
WatchSource:0}: Error finding container d2437132f0a862cb462403ef8c91fbf3fb255da76c001c1c415d7dada22c4758: Status 404 returned error can't find the container with id d2437132f0a862cb462403ef8c91fbf3fb255da76c001c1c415d7dada22c4758 Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.992934 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" event={"ID":"d8e345e7-61b3-4723-8332-bb171b328a6a","Type":"ContainerStarted","Data":"cba03a42a5fcfb234361bdfe93c70657b64117ecc54c43c42bd33e5bcec3c15f"} Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.994253 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" event={"ID":"a1e85787-64f5-453e-805e-59446da74677","Type":"ContainerStarted","Data":"f990ba22142551f435af3d8516a57bc696074a2f9001558209f09797500a3ec3"} Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.995465 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" event={"ID":"42191e90-2de0-4988-860e-61d057d81232","Type":"ContainerStarted","Data":"8d9b2e370934c63ca2484685f613c9cb3a0092b895d9084a99a869e89537c9d2"} Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.997235 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544b6b9d57-tr59c" event={"ID":"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0","Type":"ContainerStarted","Data":"3beeba1949924c3ebf66375ed20a1fd6f8b75c1dd231f03f61afbf0be522078f"} Oct 01 15:53:02 crc kubenswrapper[4949]: I1001 15:53:02.997289 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-544b6b9d57-tr59c" event={"ID":"4bbc6807-4a1b-42f9-88d7-1cf1cc8968a0","Type":"ContainerStarted","Data":"d2437132f0a862cb462403ef8c91fbf3fb255da76c001c1c415d7dada22c4758"} Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.031931 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-nmstate/nmstate-handler-g2cvf" event={"ID":"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9","Type":"ContainerStarted","Data":"96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141"} Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.032453 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.033972 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" event={"ID":"d8e345e7-61b3-4723-8332-bb171b328a6a","Type":"ContainerStarted","Data":"7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2"} Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.034314 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.037821 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" event={"ID":"a1e85787-64f5-453e-805e-59446da74677","Type":"ContainerStarted","Data":"3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8"} Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.048714 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" event={"ID":"42191e90-2de0-4988-860e-61d057d81232","Type":"ContainerStarted","Data":"2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6"} Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.059287 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-g2cvf" podStartSLOduration=1.607573292 podStartE2EDuration="6.059262608s" podCreationTimestamp="2025-10-01 15:53:01 +0000 UTC" firstStartedPulling="2025-10-01 15:53:01.875913852 +0000 UTC m=+681.181520043" lastFinishedPulling="2025-10-01 15:53:06.327603178 +0000 UTC 
m=+685.633209359" observedRunningTime="2025-10-01 15:53:07.055484198 +0000 UTC m=+686.361090399" watchObservedRunningTime="2025-10-01 15:53:07.059262608 +0000 UTC m=+686.364868799" Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.061380 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-544b6b9d57-tr59c" podStartSLOduration=6.061371615 podStartE2EDuration="6.061371615s" podCreationTimestamp="2025-10-01 15:53:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:53:03.020266941 +0000 UTC m=+682.325873132" watchObservedRunningTime="2025-10-01 15:53:07.061371615 +0000 UTC m=+686.366977806" Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.078235 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" podStartSLOduration=2.073587944 podStartE2EDuration="6.078213264s" podCreationTimestamp="2025-10-01 15:53:01 +0000 UTC" firstStartedPulling="2025-10-01 15:53:02.322879686 +0000 UTC m=+681.628485867" lastFinishedPulling="2025-10-01 15:53:06.327504986 +0000 UTC m=+685.633111187" observedRunningTime="2025-10-01 15:53:07.070399855 +0000 UTC m=+686.376006046" watchObservedRunningTime="2025-10-01 15:53:07.078213264 +0000 UTC m=+686.383819455" Oct 01 15:53:07 crc kubenswrapper[4949]: I1001 15:53:07.097632 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" podStartSLOduration=1.9577029320000001 podStartE2EDuration="6.097608961s" podCreationTimestamp="2025-10-01 15:53:01 +0000 UTC" firstStartedPulling="2025-10-01 15:53:02.159899448 +0000 UTC m=+681.465505639" lastFinishedPulling="2025-10-01 15:53:06.299805467 +0000 UTC m=+685.605411668" observedRunningTime="2025-10-01 15:53:07.091339893 +0000 UTC m=+686.396946084" watchObservedRunningTime="2025-10-01 
15:53:07.097608961 +0000 UTC m=+686.403215152" Oct 01 15:53:10 crc kubenswrapper[4949]: I1001 15:53:10.065630 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" event={"ID":"42191e90-2de0-4988-860e-61d057d81232","Type":"ContainerStarted","Data":"bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564"} Oct 01 15:53:10 crc kubenswrapper[4949]: I1001 15:53:10.087827 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" podStartSLOduration=2.040775899 podStartE2EDuration="9.087807676s" podCreationTimestamp="2025-10-01 15:53:01 +0000 UTC" firstStartedPulling="2025-10-01 15:53:02.066805364 +0000 UTC m=+681.372411555" lastFinishedPulling="2025-10-01 15:53:09.113837141 +0000 UTC m=+688.419443332" observedRunningTime="2025-10-01 15:53:10.082891105 +0000 UTC m=+689.388497296" watchObservedRunningTime="2025-10-01 15:53:10.087807676 +0000 UTC m=+689.393413887" Oct 01 15:53:11 crc kubenswrapper[4949]: I1001 15:53:11.875971 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 15:53:12 crc kubenswrapper[4949]: I1001 15:53:12.147326 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:12 crc kubenswrapper[4949]: I1001 15:53:12.151412 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:12 crc kubenswrapper[4949]: I1001 15:53:12.157316 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:13 crc kubenswrapper[4949]: I1001 15:53:13.084908 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-544b6b9d57-tr59c" Oct 01 15:53:13 crc kubenswrapper[4949]: I1001 15:53:13.138248 4949 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xlwdp"] Oct 01 15:53:18 crc kubenswrapper[4949]: I1001 15:53:18.038942 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:53:18 crc kubenswrapper[4949]: I1001 15:53:18.039394 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:53:22 crc kubenswrapper[4949]: I1001 15:53:22.129366 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.648570 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l"] Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.650261 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.653636 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.678814 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l"] Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.702797 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.702950 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zp6s\" (UniqueName: \"kubernetes.io/projected/34c10916-fa5b-41e3-82f3-6263afd45c83-kube-api-access-5zp6s\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.703002 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: 
I1001 15:53:35.804281 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zp6s\" (UniqueName: \"kubernetes.io/projected/34c10916-fa5b-41e3-82f3-6263afd45c83-kube-api-access-5zp6s\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.804389 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.804445 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.806018 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-util\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.806107 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-bundle\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.822165 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zp6s\" (UniqueName: \"kubernetes.io/projected/34c10916-fa5b-41e3-82f3-6263afd45c83-kube-api-access-5zp6s\") pod \"f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:35 crc kubenswrapper[4949]: I1001 15:53:35.988026 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:36 crc kubenswrapper[4949]: I1001 15:53:36.365698 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l"] Oct 01 15:53:37 crc kubenswrapper[4949]: I1001 15:53:37.216403 4949 generic.go:334] "Generic (PLEG): container finished" podID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerID="b1e9ae28f0a54c22117275f93095b8393b41e921a229d3d50841f543b844a056" exitCode=0 Oct 01 15:53:37 crc kubenswrapper[4949]: I1001 15:53:37.216458 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" event={"ID":"34c10916-fa5b-41e3-82f3-6263afd45c83","Type":"ContainerDied","Data":"b1e9ae28f0a54c22117275f93095b8393b41e921a229d3d50841f543b844a056"} Oct 01 15:53:37 crc kubenswrapper[4949]: I1001 15:53:37.216492 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" event={"ID":"34c10916-fa5b-41e3-82f3-6263afd45c83","Type":"ContainerStarted","Data":"9297c9c385e5a85e54eca9076bfc448658439e09fb345ce04411bfbd5025d099"} Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.254705 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/console-f9d7485db-xlwdp" podUID="62b77904-e0d8-4a98-b6e0-49b2c18821db" containerName="console" containerID="cri-o://fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43" gracePeriod=15 Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.593796 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xlwdp_62b77904-e0d8-4a98-b6e0-49b2c18821db/console/0.log" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.594102 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.759664 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-config\") pod \"62b77904-e0d8-4a98-b6e0-49b2c18821db\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.759729 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lj724\" (UniqueName: \"kubernetes.io/projected/62b77904-e0d8-4a98-b6e0-49b2c18821db-kube-api-access-lj724\") pod \"62b77904-e0d8-4a98-b6e0-49b2c18821db\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.759765 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-service-ca\") pod 
\"62b77904-e0d8-4a98-b6e0-49b2c18821db\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.759808 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-trusted-ca-bundle\") pod \"62b77904-e0d8-4a98-b6e0-49b2c18821db\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.759840 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-oauth-serving-cert\") pod \"62b77904-e0d8-4a98-b6e0-49b2c18821db\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.759905 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-serving-cert\") pod \"62b77904-e0d8-4a98-b6e0-49b2c18821db\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.759931 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-oauth-config\") pod \"62b77904-e0d8-4a98-b6e0-49b2c18821db\" (UID: \"62b77904-e0d8-4a98-b6e0-49b2c18821db\") " Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.760722 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "62b77904-e0d8-4a98-b6e0-49b2c18821db" (UID: "62b77904-e0d8-4a98-b6e0-49b2c18821db"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.760819 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-service-ca" (OuterVolumeSpecName: "service-ca") pod "62b77904-e0d8-4a98-b6e0-49b2c18821db" (UID: "62b77904-e0d8-4a98-b6e0-49b2c18821db"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.761294 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-config" (OuterVolumeSpecName: "console-config") pod "62b77904-e0d8-4a98-b6e0-49b2c18821db" (UID: "62b77904-e0d8-4a98-b6e0-49b2c18821db"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.761575 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "62b77904-e0d8-4a98-b6e0-49b2c18821db" (UID: "62b77904-e0d8-4a98-b6e0-49b2c18821db"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.768411 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "62b77904-e0d8-4a98-b6e0-49b2c18821db" (UID: "62b77904-e0d8-4a98-b6e0-49b2c18821db"). InnerVolumeSpecName "console-oauth-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.772473 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62b77904-e0d8-4a98-b6e0-49b2c18821db-kube-api-access-lj724" (OuterVolumeSpecName: "kube-api-access-lj724") pod "62b77904-e0d8-4a98-b6e0-49b2c18821db" (UID: "62b77904-e0d8-4a98-b6e0-49b2c18821db"). InnerVolumeSpecName "kube-api-access-lj724". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.772962 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "62b77904-e0d8-4a98-b6e0-49b2c18821db" (UID: "62b77904-e0d8-4a98-b6e0-49b2c18821db"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.861843 4949 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.861900 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lj724\" (UniqueName: \"kubernetes.io/projected/62b77904-e0d8-4a98-b6e0-49b2c18821db-kube-api-access-lj724\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.861922 4949 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-service-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.861933 4949 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.861943 4949 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/62b77904-e0d8-4a98-b6e0-49b2c18821db-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.861952 4949 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:38 crc kubenswrapper[4949]: I1001 15:53:38.861962 4949 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/62b77904-e0d8-4a98-b6e0-49b2c18821db-console-oauth-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.263150 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-console_console-f9d7485db-xlwdp_62b77904-e0d8-4a98-b6e0-49b2c18821db/console/0.log" Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.263196 4949 generic.go:334] "Generic (PLEG): container finished" podID="62b77904-e0d8-4a98-b6e0-49b2c18821db" containerID="fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43" exitCode=2 Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.263241 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xlwdp" event={"ID":"62b77904-e0d8-4a98-b6e0-49b2c18821db","Type":"ContainerDied","Data":"fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43"} Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.263268 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-xlwdp" 
event={"ID":"62b77904-e0d8-4a98-b6e0-49b2c18821db","Type":"ContainerDied","Data":"dc98c9cfd0dc6a0079dfb2137f79dea8785293a720f9c15966cd71e18f481208"} Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.263285 4949 scope.go:117] "RemoveContainer" containerID="fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43" Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.263381 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-xlwdp" Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.273232 4949 generic.go:334] "Generic (PLEG): container finished" podID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerID="de931c60d248a225434aad257e67a24cde876361fc99a6b248b1cae2012a8cbb" exitCode=0 Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.273298 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" event={"ID":"34c10916-fa5b-41e3-82f3-6263afd45c83","Type":"ContainerDied","Data":"de931c60d248a225434aad257e67a24cde876361fc99a6b248b1cae2012a8cbb"} Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.294617 4949 scope.go:117] "RemoveContainer" containerID="fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43" Oct 01 15:53:39 crc kubenswrapper[4949]: E1001 15:53:39.295148 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43\": container with ID starting with fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43 not found: ID does not exist" containerID="fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43" Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.295196 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43"} err="failed to get container status \"fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43\": rpc error: code = NotFound desc = could not find container \"fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43\": container with ID starting with fac36984e56b9db23222a3c3358942cb70499209c3233056b2824f5f3f3f5e43 not found: ID does not exist" Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.318027 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-console/console-f9d7485db-xlwdp"] Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.322165 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-console/console-f9d7485db-xlwdp"] Oct 01 15:53:39 crc kubenswrapper[4949]: E1001 15:53:39.393278 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod62b77904_e0d8_4a98_b6e0_49b2c18821db.slice\": RecentStats: unable to find data in memory cache]" Oct 01 15:53:39 crc kubenswrapper[4949]: I1001 15:53:39.609499 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62b77904-e0d8-4a98-b6e0-49b2c18821db" path="/var/lib/kubelet/pods/62b77904-e0d8-4a98-b6e0-49b2c18821db/volumes" Oct 01 15:53:40 crc kubenswrapper[4949]: I1001 15:53:40.287233 4949 generic.go:334] "Generic (PLEG): container finished" podID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerID="0a603e5820038e048e22460f8964cb6f346177f794a5be2d29b06e5111510e23" exitCode=0 Oct 01 15:53:40 crc kubenswrapper[4949]: I1001 15:53:40.287282 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" event={"ID":"34c10916-fa5b-41e3-82f3-6263afd45c83","Type":"ContainerDied","Data":"0a603e5820038e048e22460f8964cb6f346177f794a5be2d29b06e5111510e23"} 
Oct 01 15:53:41 crc kubenswrapper[4949]: I1001 15:53:41.500033 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:41 crc kubenswrapper[4949]: I1001 15:53:41.697525 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-bundle\") pod \"34c10916-fa5b-41e3-82f3-6263afd45c83\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " Oct 01 15:53:41 crc kubenswrapper[4949]: I1001 15:53:41.697661 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-util\") pod \"34c10916-fa5b-41e3-82f3-6263afd45c83\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " Oct 01 15:53:41 crc kubenswrapper[4949]: I1001 15:53:41.697691 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zp6s\" (UniqueName: \"kubernetes.io/projected/34c10916-fa5b-41e3-82f3-6263afd45c83-kube-api-access-5zp6s\") pod \"34c10916-fa5b-41e3-82f3-6263afd45c83\" (UID: \"34c10916-fa5b-41e3-82f3-6263afd45c83\") " Oct 01 15:53:41 crc kubenswrapper[4949]: I1001 15:53:41.698891 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-bundle" (OuterVolumeSpecName: "bundle") pod "34c10916-fa5b-41e3-82f3-6263afd45c83" (UID: "34c10916-fa5b-41e3-82f3-6263afd45c83"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:53:41 crc kubenswrapper[4949]: I1001 15:53:41.703330 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34c10916-fa5b-41e3-82f3-6263afd45c83-kube-api-access-5zp6s" (OuterVolumeSpecName: "kube-api-access-5zp6s") pod "34c10916-fa5b-41e3-82f3-6263afd45c83" (UID: "34c10916-fa5b-41e3-82f3-6263afd45c83"). InnerVolumeSpecName "kube-api-access-5zp6s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:53:41 crc kubenswrapper[4949]: I1001 15:53:41.799480 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zp6s\" (UniqueName: \"kubernetes.io/projected/34c10916-fa5b-41e3-82f3-6263afd45c83-kube-api-access-5zp6s\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:41 crc kubenswrapper[4949]: I1001 15:53:41.799894 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:41 crc kubenswrapper[4949]: I1001 15:53:41.968784 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-util" (OuterVolumeSpecName: "util") pod "34c10916-fa5b-41e3-82f3-6263afd45c83" (UID: "34c10916-fa5b-41e3-82f3-6263afd45c83"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:53:42 crc kubenswrapper[4949]: I1001 15:53:42.001740 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34c10916-fa5b-41e3-82f3-6263afd45c83-util\") on node \"crc\" DevicePath \"\"" Oct 01 15:53:42 crc kubenswrapper[4949]: I1001 15:53:42.302314 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" event={"ID":"34c10916-fa5b-41e3-82f3-6263afd45c83","Type":"ContainerDied","Data":"9297c9c385e5a85e54eca9076bfc448658439e09fb345ce04411bfbd5025d099"} Oct 01 15:53:42 crc kubenswrapper[4949]: I1001 15:53:42.302358 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9297c9c385e5a85e54eca9076bfc448658439e09fb345ce04411bfbd5025d099" Oct 01 15:53:42 crc kubenswrapper[4949]: I1001 15:53:42.302442 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l" Oct 01 15:53:48 crc kubenswrapper[4949]: I1001 15:53:48.038819 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:53:48 crc kubenswrapper[4949]: I1001 15:53:48.039083 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.015102 4949 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd"] Oct 01 15:53:56 crc kubenswrapper[4949]: E1001 15:53:56.015888 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerName="extract" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.015905 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerName="extract" Oct 01 15:53:56 crc kubenswrapper[4949]: E1001 15:53:56.015918 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerName="util" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.015924 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerName="util" Oct 01 15:53:56 crc kubenswrapper[4949]: E1001 15:53:56.015936 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerName="pull" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.015943 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerName="pull" Oct 01 15:53:56 crc kubenswrapper[4949]: E1001 15:53:56.015959 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62b77904-e0d8-4a98-b6e0-49b2c18821db" containerName="console" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.015966 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="62b77904-e0d8-4a98-b6e0-49b2c18821db" containerName="console" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.016088 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="62b77904-e0d8-4a98-b6e0-49b2c18821db" containerName="console" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.016105 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="34c10916-fa5b-41e3-82f3-6263afd45c83" containerName="extract" Oct 01 15:53:56 crc 
kubenswrapper[4949]: I1001 15:53:56.016599 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.018922 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.018922 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.019164 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.019303 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.022559 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-p4t9c" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.031942 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd"] Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.093516 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-apiservice-cert\") pod \"metallb-operator-controller-manager-5994bb94b4-vzvwd\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.093563 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-webhook-cert\") pod \"metallb-operator-controller-manager-5994bb94b4-vzvwd\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.093593 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49p5v\" (UniqueName: \"kubernetes.io/projected/c3977b01-fc92-43d9-988f-132323039996-kube-api-access-49p5v\") pod \"metallb-operator-controller-manager-5994bb94b4-vzvwd\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.194145 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49p5v\" (UniqueName: \"kubernetes.io/projected/c3977b01-fc92-43d9-988f-132323039996-kube-api-access-49p5v\") pod \"metallb-operator-controller-manager-5994bb94b4-vzvwd\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.194233 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-apiservice-cert\") pod \"metallb-operator-controller-manager-5994bb94b4-vzvwd\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.194257 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-webhook-cert\") pod \"metallb-operator-controller-manager-5994bb94b4-vzvwd\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " 
pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.200885 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-apiservice-cert\") pod \"metallb-operator-controller-manager-5994bb94b4-vzvwd\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.209219 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-webhook-cert\") pod \"metallb-operator-controller-manager-5994bb94b4-vzvwd\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.213792 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49p5v\" (UniqueName: \"kubernetes.io/projected/c3977b01-fc92-43d9-988f-132323039996-kube-api-access-49p5v\") pod \"metallb-operator-controller-manager-5994bb94b4-vzvwd\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.333019 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.335964 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9"] Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.336642 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.338718 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.339298 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.339590 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-w56tn" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.353216 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9"] Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.397172 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-apiservice-cert\") pod \"metallb-operator-webhook-server-5d485496f6-kg4s9\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.397216 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-webhook-cert\") pod \"metallb-operator-webhook-server-5d485496f6-kg4s9\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.397242 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwm9b\" (UniqueName: 
\"kubernetes.io/projected/37b8bea9-55e5-46a9-9217-d001b1157e9f-kube-api-access-xwm9b\") pod \"metallb-operator-webhook-server-5d485496f6-kg4s9\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.498067 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-apiservice-cert\") pod \"metallb-operator-webhook-server-5d485496f6-kg4s9\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.498333 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-webhook-cert\") pod \"metallb-operator-webhook-server-5d485496f6-kg4s9\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.498355 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwm9b\" (UniqueName: \"kubernetes.io/projected/37b8bea9-55e5-46a9-9217-d001b1157e9f-kube-api-access-xwm9b\") pod \"metallb-operator-webhook-server-5d485496f6-kg4s9\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.503691 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-webhook-cert\") pod \"metallb-operator-webhook-server-5d485496f6-kg4s9\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 
15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.508795 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-apiservice-cert\") pod \"metallb-operator-webhook-server-5d485496f6-kg4s9\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.519066 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwm9b\" (UniqueName: \"kubernetes.io/projected/37b8bea9-55e5-46a9-9217-d001b1157e9f-kube-api-access-xwm9b\") pod \"metallb-operator-webhook-server-5d485496f6-kg4s9\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.559466 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd"] Oct 01 15:53:56 crc kubenswrapper[4949]: I1001 15:53:56.695510 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:53:57 crc kubenswrapper[4949]: I1001 15:53:57.221861 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9"] Oct 01 15:53:57 crc kubenswrapper[4949]: W1001 15:53:57.224471 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod37b8bea9_55e5_46a9_9217_d001b1157e9f.slice/crio-3512c610195a66906a1aa7e3fcea0d4d5e35c771e5f449d83e99f3ebe4a799ad WatchSource:0}: Error finding container 3512c610195a66906a1aa7e3fcea0d4d5e35c771e5f449d83e99f3ebe4a799ad: Status 404 returned error can't find the container with id 3512c610195a66906a1aa7e3fcea0d4d5e35c771e5f449d83e99f3ebe4a799ad Oct 01 15:53:57 crc kubenswrapper[4949]: I1001 15:53:57.380022 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" event={"ID":"37b8bea9-55e5-46a9-9217-d001b1157e9f","Type":"ContainerStarted","Data":"3512c610195a66906a1aa7e3fcea0d4d5e35c771e5f449d83e99f3ebe4a799ad"} Oct 01 15:53:57 crc kubenswrapper[4949]: I1001 15:53:57.380979 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" event={"ID":"c3977b01-fc92-43d9-988f-132323039996","Type":"ContainerStarted","Data":"f12fe69cc3ed5bbbdb99773b4fa215f94b3c492b3b76bd77184134144594a112"} Oct 01 15:54:00 crc kubenswrapper[4949]: I1001 15:54:00.406812 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" event={"ID":"c3977b01-fc92-43d9-988f-132323039996","Type":"ContainerStarted","Data":"5f26e21452d489b74fcf1e93d9628f2840398552d33847a7b9d64f4748b83cd6"} Oct 01 15:54:00 crc kubenswrapper[4949]: I1001 15:54:00.407475 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:54:00 crc kubenswrapper[4949]: I1001 15:54:00.435892 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" podStartSLOduration=2.751955155 podStartE2EDuration="5.43517469s" podCreationTimestamp="2025-10-01 15:53:55 +0000 UTC" firstStartedPulling="2025-10-01 15:53:56.586461712 +0000 UTC m=+735.892067903" lastFinishedPulling="2025-10-01 15:53:59.269681247 +0000 UTC m=+738.575287438" observedRunningTime="2025-10-01 15:54:00.426162562 +0000 UTC m=+739.731768773" watchObservedRunningTime="2025-10-01 15:54:00.43517469 +0000 UTC m=+739.740780891" Oct 01 15:54:02 crc kubenswrapper[4949]: I1001 15:54:02.420427 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" event={"ID":"37b8bea9-55e5-46a9-9217-d001b1157e9f","Type":"ContainerStarted","Data":"54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c"} Oct 01 15:54:02 crc kubenswrapper[4949]: I1001 15:54:02.420960 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:54:02 crc kubenswrapper[4949]: I1001 15:54:02.442107 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" podStartSLOduration=1.8572933040000001 podStartE2EDuration="6.442083898s" podCreationTimestamp="2025-10-01 15:53:56 +0000 UTC" firstStartedPulling="2025-10-01 15:53:57.22865342 +0000 UTC m=+736.534259611" lastFinishedPulling="2025-10-01 15:54:01.813443994 +0000 UTC m=+741.119050205" observedRunningTime="2025-10-01 15:54:02.437518571 +0000 UTC m=+741.743124762" watchObservedRunningTime="2025-10-01 15:54:02.442083898 +0000 UTC m=+741.747690089" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.052469 4949 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm5dr"] Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.053177 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" podUID="0df389db-e47d-4b16-9221-f1e5311c5cd6" containerName="controller-manager" containerID="cri-o://5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8" gracePeriod=30 Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.100684 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr"] Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.100940 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" podUID="ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" containerName="route-controller-manager" containerID="cri-o://c66212f138dd731ce04816c9313303d83af05d266f166596e7f0acbada9a70c5" gracePeriod=30 Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.441076 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.441555 4949 generic.go:334] "Generic (PLEG): container finished" podID="0df389db-e47d-4b16-9221-f1e5311c5cd6" containerID="5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8" exitCode=0 Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.441623 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" event={"ID":"0df389db-e47d-4b16-9221-f1e5311c5cd6","Type":"ContainerDied","Data":"5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8"} Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.441650 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" event={"ID":"0df389db-e47d-4b16-9221-f1e5311c5cd6","Type":"ContainerDied","Data":"df9635a815ea03823da56a9c4158745f5984d0fd963e2fbc26517136c9eff0b1"} Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.441673 4949 scope.go:117] "RemoveContainer" containerID="5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.458621 4949 generic.go:334] "Generic (PLEG): container finished" podID="ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" containerID="c66212f138dd731ce04816c9313303d83af05d266f166596e7f0acbada9a70c5" exitCode=0 Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.458672 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" event={"ID":"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342","Type":"ContainerDied","Data":"c66212f138dd731ce04816c9313303d83af05d266f166596e7f0acbada9a70c5"} Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.479537 4949 scope.go:117] "RemoveContainer" containerID="5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8" Oct 01 
15:54:05 crc kubenswrapper[4949]: E1001 15:54:05.480204 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8\": container with ID starting with 5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8 not found: ID does not exist" containerID="5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.480245 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8"} err="failed to get container status \"5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8\": rpc error: code = NotFound desc = could not find container \"5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8\": container with ID starting with 5e42e8d85aadb3a15018a374233fd37e736c557cd32a7c6c346257840aab55d8 not found: ID does not exist" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.506645 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.516851 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btq4q\" (UniqueName: \"kubernetes.io/projected/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-kube-api-access-btq4q\") pod \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.517729 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0df389db-e47d-4b16-9221-f1e5311c5cd6-serving-cert\") pod \"0df389db-e47d-4b16-9221-f1e5311c5cd6\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.517755 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2h4x9\" (UniqueName: \"kubernetes.io/projected/0df389db-e47d-4b16-9221-f1e5311c5cd6-kube-api-access-2h4x9\") pod \"0df389db-e47d-4b16-9221-f1e5311c5cd6\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.517782 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-client-ca\") pod \"0df389db-e47d-4b16-9221-f1e5311c5cd6\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.517802 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-config\") pod \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.517817 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-client-ca\") pod \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.517834 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-serving-cert\") pod \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\" (UID: \"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342\") " Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.517883 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-config\") pod \"0df389db-e47d-4b16-9221-f1e5311c5cd6\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.517900 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-proxy-ca-bundles\") pod \"0df389db-e47d-4b16-9221-f1e5311c5cd6\" (UID: \"0df389db-e47d-4b16-9221-f1e5311c5cd6\") " Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.518561 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "0df389db-e47d-4b16-9221-f1e5311c5cd6" (UID: "0df389db-e47d-4b16-9221-f1e5311c5cd6"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.519365 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-config" (OuterVolumeSpecName: "config") pod "ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" (UID: "ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.520573 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-client-ca" (OuterVolumeSpecName: "client-ca") pod "0df389db-e47d-4b16-9221-f1e5311c5cd6" (UID: "0df389db-e47d-4b16-9221-f1e5311c5cd6"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.520645 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-client-ca" (OuterVolumeSpecName: "client-ca") pod "ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" (UID: "ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.521273 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-config" (OuterVolumeSpecName: "config") pod "0df389db-e47d-4b16-9221-f1e5311c5cd6" (UID: "0df389db-e47d-4b16-9221-f1e5311c5cd6"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.525046 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0df389db-e47d-4b16-9221-f1e5311c5cd6-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0df389db-e47d-4b16-9221-f1e5311c5cd6" (UID: "0df389db-e47d-4b16-9221-f1e5311c5cd6"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.525240 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-kube-api-access-btq4q" (OuterVolumeSpecName: "kube-api-access-btq4q") pod "ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" (UID: "ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342"). InnerVolumeSpecName "kube-api-access-btq4q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.526095 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0df389db-e47d-4b16-9221-f1e5311c5cd6-kube-api-access-2h4x9" (OuterVolumeSpecName: "kube-api-access-2h4x9") pod "0df389db-e47d-4b16-9221-f1e5311c5cd6" (UID: "0df389db-e47d-4b16-9221-f1e5311c5cd6"). InnerVolumeSpecName "kube-api-access-2h4x9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.529492 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" (UID: "ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.619243 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btq4q\" (UniqueName: \"kubernetes.io/projected/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-kube-api-access-btq4q\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.619294 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0df389db-e47d-4b16-9221-f1e5311c5cd6-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.619309 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2h4x9\" (UniqueName: \"kubernetes.io/projected/0df389db-e47d-4b16-9221-f1e5311c5cd6-kube-api-access-2h4x9\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.619321 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-client-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.619335 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.619347 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-client-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.619359 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.619371 4949 
reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:05 crc kubenswrapper[4949]: I1001 15:54:05.619384 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0df389db-e47d-4b16-9221-f1e5311c5cd6-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.188633 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-577b94f566-tmk5c"] Oct 01 15:54:06 crc kubenswrapper[4949]: E1001 15:54:06.188893 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" containerName="route-controller-manager" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.188911 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" containerName="route-controller-manager" Oct 01 15:54:06 crc kubenswrapper[4949]: E1001 15:54:06.188948 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0df389db-e47d-4b16-9221-f1e5311c5cd6" containerName="controller-manager" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.188957 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="0df389db-e47d-4b16-9221-f1e5311c5cd6" containerName="controller-manager" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.189093 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="0df389db-e47d-4b16-9221-f1e5311c5cd6" containerName="controller-manager" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.189110 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" containerName="route-controller-manager" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.189751 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.206881 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b94f566-tmk5c"] Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.332374 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-client-ca\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.332453 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrq69\" (UniqueName: \"kubernetes.io/projected/08125712-c7ec-4bad-86b0-c5077cc27713-kube-api-access-zrq69\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.332526 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-proxy-ca-bundles\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.332577 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-config\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " 
pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.332629 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08125712-c7ec-4bad-86b0-c5077cc27713-serving-cert\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.433832 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-config\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.433914 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08125712-c7ec-4bad-86b0-c5077cc27713-serving-cert\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.433955 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-client-ca\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.433986 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrq69\" (UniqueName: 
\"kubernetes.io/projected/08125712-c7ec-4bad-86b0-c5077cc27713-kube-api-access-zrq69\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.434008 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-proxy-ca-bundles\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.435349 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-config\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.435446 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-proxy-ca-bundles\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.435507 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-client-ca\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.437862 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08125712-c7ec-4bad-86b0-c5077cc27713-serving-cert\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.454486 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrq69\" (UniqueName: \"kubernetes.io/projected/08125712-c7ec-4bad-86b0-c5077cc27713-kube-api-access-zrq69\") pod \"controller-manager-577b94f566-tmk5c\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.467389 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.467387 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr" event={"ID":"ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342","Type":"ContainerDied","Data":"be713065403d589f3e5295d9b29a93c5e8fc94c9713a27166c2fe42d0df05aa0"} Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.467621 4949 scope.go:117] "RemoveContainer" containerID="c66212f138dd731ce04816c9313303d83af05d266f166596e7f0acbada9a70c5" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.468883 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sm5dr" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.493107 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm5dr"] Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.498530 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sm5dr"] Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.504176 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.509314 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr"] Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.515902 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-4v6mr"] Oct 01 15:54:06 crc kubenswrapper[4949]: I1001 15:54:06.938088 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-577b94f566-tmk5c"] Oct 01 15:54:06 crc kubenswrapper[4949]: W1001 15:54:06.946247 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod08125712_c7ec_4bad_86b0_c5077cc27713.slice/crio-ce6df2ed3ffb3f70adab51942658c4746528f303b8f1d1bc84dc9ffa3984303e WatchSource:0}: Error finding container ce6df2ed3ffb3f70adab51942658c4746528f303b8f1d1bc84dc9ffa3984303e: Status 404 returned error can't find the container with id ce6df2ed3ffb3f70adab51942658c4746528f303b8f1d1bc84dc9ffa3984303e Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.070567 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-controller-manager/controller-manager-577b94f566-tmk5c"] Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.127218 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5"] Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.128117 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.130013 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.132188 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.132255 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.132262 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.132366 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.132572 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.139181 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5"] Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.242017 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/228480aa-f994-4572-990a-f88b471ea464-config\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.242066 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228480aa-f994-4572-990a-f88b471ea464-client-ca\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.242089 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228480aa-f994-4572-990a-f88b471ea464-serving-cert\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.242171 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lpt8\" (UniqueName: \"kubernetes.io/projected/228480aa-f994-4572-990a-f88b471ea464-kube-api-access-9lpt8\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.343446 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228480aa-f994-4572-990a-f88b471ea464-config\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " 
pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.343525 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/228480aa-f994-4572-990a-f88b471ea464-client-ca\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.343545 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228480aa-f994-4572-990a-f88b471ea464-serving-cert\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.343563 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lpt8\" (UniqueName: \"kubernetes.io/projected/228480aa-f994-4572-990a-f88b471ea464-kube-api-access-9lpt8\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.344665 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/228480aa-f994-4572-990a-f88b471ea464-config\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.344865 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/228480aa-f994-4572-990a-f88b471ea464-client-ca\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.348201 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/228480aa-f994-4572-990a-f88b471ea464-serving-cert\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.359010 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lpt8\" (UniqueName: \"kubernetes.io/projected/228480aa-f994-4572-990a-f88b471ea464-kube-api-access-9lpt8\") pod \"route-controller-manager-9547d54c4-rjjl5\" (UID: \"228480aa-f994-4572-990a-f88b471ea464\") " pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.444389 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.477286 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" event={"ID":"08125712-c7ec-4bad-86b0-c5077cc27713","Type":"ContainerStarted","Data":"0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2"} Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.477334 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" event={"ID":"08125712-c7ec-4bad-86b0-c5077cc27713","Type":"ContainerStarted","Data":"ce6df2ed3ffb3f70adab51942658c4746528f303b8f1d1bc84dc9ffa3984303e"} Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.477372 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" podUID="08125712-c7ec-4bad-86b0-c5077cc27713" containerName="controller-manager" containerID="cri-o://0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2" gracePeriod=30 Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.477550 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.483557 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.497411 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" podStartSLOduration=2.497394383 podStartE2EDuration="2.497394383s" podCreationTimestamp="2025-10-01 15:54:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:54:07.496738535 +0000 UTC m=+746.802344726" watchObservedRunningTime="2025-10-01 15:54:07.497394383 +0000 UTC m=+746.803000574" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.627478 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0df389db-e47d-4b16-9221-f1e5311c5cd6" path="/var/lib/kubelet/pods/0df389db-e47d-4b16-9221-f1e5311c5cd6/volumes" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.629403 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342" path="/var/lib/kubelet/pods/ce63f21b-3f88-4e9b-ae2c-0cbd9cd36342/volumes" Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.673442 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5"] Oct 01 15:54:07 crc kubenswrapper[4949]: W1001 15:54:07.697407 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228480aa_f994_4572_990a_f88b471ea464.slice/crio-9f73faf24081a90ca033e0292e7264f000002fb883589b4ce5332c6dfe9c6f26 WatchSource:0}: Error finding container 9f73faf24081a90ca033e0292e7264f000002fb883589b4ce5332c6dfe9c6f26: Status 404 returned error can't find the container with id 9f73faf24081a90ca033e0292e7264f000002fb883589b4ce5332c6dfe9c6f26 Oct 01 15:54:07 crc kubenswrapper[4949]: I1001 15:54:07.915663 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.056449 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-config\") pod \"08125712-c7ec-4bad-86b0-c5077cc27713\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.056507 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-client-ca\") pod \"08125712-c7ec-4bad-86b0-c5077cc27713\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.056545 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08125712-c7ec-4bad-86b0-c5077cc27713-serving-cert\") pod \"08125712-c7ec-4bad-86b0-c5077cc27713\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.056628 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrq69\" (UniqueName: \"kubernetes.io/projected/08125712-c7ec-4bad-86b0-c5077cc27713-kube-api-access-zrq69\") pod \"08125712-c7ec-4bad-86b0-c5077cc27713\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.057493 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-proxy-ca-bundles\") pod \"08125712-c7ec-4bad-86b0-c5077cc27713\" (UID: \"08125712-c7ec-4bad-86b0-c5077cc27713\") " Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.057388 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-client-ca" (OuterVolumeSpecName: "client-ca") pod "08125712-c7ec-4bad-86b0-c5077cc27713" (UID: "08125712-c7ec-4bad-86b0-c5077cc27713"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.057439 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-config" (OuterVolumeSpecName: "config") pod "08125712-c7ec-4bad-86b0-c5077cc27713" (UID: "08125712-c7ec-4bad-86b0-c5077cc27713"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.057776 4949 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-client-ca\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.057796 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.057993 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "08125712-c7ec-4bad-86b0-c5077cc27713" (UID: "08125712-c7ec-4bad-86b0-c5077cc27713"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.061248 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08125712-c7ec-4bad-86b0-c5077cc27713-kube-api-access-zrq69" (OuterVolumeSpecName: "kube-api-access-zrq69") pod "08125712-c7ec-4bad-86b0-c5077cc27713" (UID: "08125712-c7ec-4bad-86b0-c5077cc27713"). InnerVolumeSpecName "kube-api-access-zrq69". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.062333 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/08125712-c7ec-4bad-86b0-c5077cc27713-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "08125712-c7ec-4bad-86b0-c5077cc27713" (UID: "08125712-c7ec-4bad-86b0-c5077cc27713"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.158480 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrq69\" (UniqueName: \"kubernetes.io/projected/08125712-c7ec-4bad-86b0-c5077cc27713-kube-api-access-zrq69\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.158532 4949 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/08125712-c7ec-4bad-86b0-c5077cc27713-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.158544 4949 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08125712-c7ec-4bad-86b0-c5077cc27713-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.194232 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-cf744949b-7tlf8"] Oct 01 15:54:08 crc kubenswrapper[4949]: E1001 
15:54:08.194530 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08125712-c7ec-4bad-86b0-c5077cc27713" containerName="controller-manager" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.194551 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="08125712-c7ec-4bad-86b0-c5077cc27713" containerName="controller-manager" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.194662 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="08125712-c7ec-4bad-86b0-c5077cc27713" containerName="controller-manager" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.195101 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.207441 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cf744949b-7tlf8"] Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.259257 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b326cb34-f339-4b3c-b73d-0fc285b56982-config\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.259297 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b326cb34-f339-4b3c-b73d-0fc285b56982-proxy-ca-bundles\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.259349 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/b326cb34-f339-4b3c-b73d-0fc285b56982-client-ca\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.259370 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b326cb34-f339-4b3c-b73d-0fc285b56982-serving-cert\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.259491 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kh84q\" (UniqueName: \"kubernetes.io/projected/b326cb34-f339-4b3c-b73d-0fc285b56982-kube-api-access-kh84q\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.360301 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kh84q\" (UniqueName: \"kubernetes.io/projected/b326cb34-f339-4b3c-b73d-0fc285b56982-kube-api-access-kh84q\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.360344 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b326cb34-f339-4b3c-b73d-0fc285b56982-config\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 
15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.360366 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b326cb34-f339-4b3c-b73d-0fc285b56982-proxy-ca-bundles\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.360425 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b326cb34-f339-4b3c-b73d-0fc285b56982-client-ca\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.360452 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b326cb34-f339-4b3c-b73d-0fc285b56982-serving-cert\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.361267 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b326cb34-f339-4b3c-b73d-0fc285b56982-client-ca\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.361581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b326cb34-f339-4b3c-b73d-0fc285b56982-proxy-ca-bundles\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " 
pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.361614 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b326cb34-f339-4b3c-b73d-0fc285b56982-config\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.364953 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b326cb34-f339-4b3c-b73d-0fc285b56982-serving-cert\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.376931 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kh84q\" (UniqueName: \"kubernetes.io/projected/b326cb34-f339-4b3c-b73d-0fc285b56982-kube-api-access-kh84q\") pod \"controller-manager-cf744949b-7tlf8\" (UID: \"b326cb34-f339-4b3c-b73d-0fc285b56982\") " pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.487047 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" event={"ID":"228480aa-f994-4572-990a-f88b471ea464","Type":"ContainerStarted","Data":"d9849733108c167ca69243d3a6025dba9e77d56e4e40c1972767b72d2377ff1e"} Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.487094 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" 
event={"ID":"228480aa-f994-4572-990a-f88b471ea464","Type":"ContainerStarted","Data":"9f73faf24081a90ca033e0292e7264f000002fb883589b4ce5332c6dfe9c6f26"} Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.487901 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.489987 4949 generic.go:334] "Generic (PLEG): container finished" podID="08125712-c7ec-4bad-86b0-c5077cc27713" containerID="0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2" exitCode=0 Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.490032 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.490044 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" event={"ID":"08125712-c7ec-4bad-86b0-c5077cc27713","Type":"ContainerDied","Data":"0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2"} Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.492645 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-577b94f566-tmk5c" event={"ID":"08125712-c7ec-4bad-86b0-c5077cc27713","Type":"ContainerDied","Data":"ce6df2ed3ffb3f70adab51942658c4746528f303b8f1d1bc84dc9ffa3984303e"} Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.492685 4949 scope.go:117] "RemoveContainer" containerID="0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.499887 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.512594 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-9547d54c4-rjjl5" podStartSLOduration=1.51257384 podStartE2EDuration="1.51257384s" podCreationTimestamp="2025-10-01 15:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:54:08.503835489 +0000 UTC m=+747.809441690" watchObservedRunningTime="2025-10-01 15:54:08.51257384 +0000 UTC m=+747.818180031" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.521349 4949 scope.go:117] "RemoveContainer" containerID="0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2" Oct 01 15:54:08 crc kubenswrapper[4949]: E1001 15:54:08.523899 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2\": container with ID starting with 0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2 not found: ID does not exist" containerID="0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.523942 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2"} err="failed to get container status \"0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2\": rpc error: code = NotFound desc = could not find container \"0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2\": container with ID starting with 0512834ba9eed134f887f350293eea8e205a01af26bf19699f0a44c1a0f22cf2 not found: ID does not exist" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.532001 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.561500 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-577b94f566-tmk5c"] Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.568292 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-577b94f566-tmk5c"] Oct 01 15:54:08 crc kubenswrapper[4949]: I1001 15:54:08.734273 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-cf744949b-7tlf8"] Oct 01 15:54:08 crc kubenswrapper[4949]: W1001 15:54:08.752831 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb326cb34_f339_4b3c_b73d_0fc285b56982.slice/crio-4bc549d1f2a812154384ddb8d14e79968495fb15e7bbccedaecbb4b6ae5ecb78 WatchSource:0}: Error finding container 4bc549d1f2a812154384ddb8d14e79968495fb15e7bbccedaecbb4b6ae5ecb78: Status 404 returned error can't find the container with id 4bc549d1f2a812154384ddb8d14e79968495fb15e7bbccedaecbb4b6ae5ecb78 Oct 01 15:54:09 crc kubenswrapper[4949]: I1001 15:54:09.498105 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" event={"ID":"b326cb34-f339-4b3c-b73d-0fc285b56982","Type":"ContainerStarted","Data":"26d60a5d97aefe765a754e5d470978745a23494893181c9194765e47c40791eb"} Oct 01 15:54:09 crc kubenswrapper[4949]: I1001 15:54:09.498427 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" event={"ID":"b326cb34-f339-4b3c-b73d-0fc285b56982","Type":"ContainerStarted","Data":"4bc549d1f2a812154384ddb8d14e79968495fb15e7bbccedaecbb4b6ae5ecb78"} Oct 01 15:54:09 crc kubenswrapper[4949]: I1001 15:54:09.609439 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod 
volumes dir" podUID="08125712-c7ec-4bad-86b0-c5077cc27713" path="/var/lib/kubelet/pods/08125712-c7ec-4bad-86b0-c5077cc27713/volumes" Oct 01 15:54:10 crc kubenswrapper[4949]: I1001 15:54:10.504059 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:10 crc kubenswrapper[4949]: I1001 15:54:10.509642 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" Oct 01 15:54:10 crc kubenswrapper[4949]: I1001 15:54:10.526845 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-cf744949b-7tlf8" podStartSLOduration=3.526826279 podStartE2EDuration="3.526826279s" podCreationTimestamp="2025-10-01 15:54:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:54:09.514930952 +0000 UTC m=+748.820537143" watchObservedRunningTime="2025-10-01 15:54:10.526826279 +0000 UTC m=+749.832432470" Oct 01 15:54:16 crc kubenswrapper[4949]: I1001 15:54:16.700012 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 15:54:18 crc kubenswrapper[4949]: I1001 15:54:18.039450 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:54:18 crc kubenswrapper[4949]: I1001 15:54:18.039776 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" 
output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:54:18 crc kubenswrapper[4949]: I1001 15:54:18.039826 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:54:18 crc kubenswrapper[4949]: I1001 15:54:18.040511 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8b48ddbdf5b95765cf3f08bfbf80fa29211dfe735cc809fa7f3ec31b955af407"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 15:54:18 crc kubenswrapper[4949]: I1001 15:54:18.040567 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://8b48ddbdf5b95765cf3f08bfbf80fa29211dfe735cc809fa7f3ec31b955af407" gracePeriod=600 Oct 01 15:54:18 crc kubenswrapper[4949]: I1001 15:54:18.545177 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="8b48ddbdf5b95765cf3f08bfbf80fa29211dfe735cc809fa7f3ec31b955af407" exitCode=0 Oct 01 15:54:18 crc kubenswrapper[4949]: I1001 15:54:18.545258 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"8b48ddbdf5b95765cf3f08bfbf80fa29211dfe735cc809fa7f3ec31b955af407"} Oct 01 15:54:18 crc kubenswrapper[4949]: I1001 15:54:18.545569 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" 
event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"4d3a1844e3e942fdd712d174f5f7801debfe4a5d8b96ab72892f6da901567689"} Oct 01 15:54:18 crc kubenswrapper[4949]: I1001 15:54:18.545596 4949 scope.go:117] "RemoveContainer" containerID="d65dc5a00a68ef848a51564a98d129e0f9a7891d85cd971faa8d65554abab984" Oct 01 15:54:19 crc kubenswrapper[4949]: I1001 15:54:19.846001 4949 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Oct 01 15:54:36 crc kubenswrapper[4949]: I1001 15:54:36.334969 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.012908 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-d8v9f"] Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.015817 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.016719 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8"] Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.017419 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.017600 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-6rvf8" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.018178 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.019903 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.026775 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.039587 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8"] Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.094217 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-g9xqm"] Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.095027 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.098942 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-5d688f5ffc-xpj4x"] Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.100072 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.103472 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.103617 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.103970 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-psrs5" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.104169 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.104384 4949 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.110543 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-xpj4x"] Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.127724 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-sockets\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.127773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8qhr\" (UniqueName: \"kubernetes.io/projected/df0f332d-5479-4bfe-846e-03805ead7d11-kube-api-access-p8qhr\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.127804 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/df0f332d-5479-4bfe-846e-03805ead7d11-frr-startup\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.127831 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2krrv\" (UniqueName: \"kubernetes.io/projected/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-kube-api-access-2krrv\") pod \"frr-k8s-webhook-server-5478bdb765-wvjl8\" (UID: \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.127858 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-conf\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.127873 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-metrics\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.127887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df0f332d-5479-4bfe-846e-03805ead7d11-metrics-certs\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.127908 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-reloader\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.127925 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-cert\") pod \"frr-k8s-webhook-server-5478bdb765-wvjl8\" (UID: \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.229192 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-conf\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.229477 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-metrics\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.229585 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df0f332d-5479-4bfe-846e-03805ead7d11-metrics-certs\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.229666 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjd4d\" (UniqueName: \"kubernetes.io/projected/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-kube-api-access-wjd4d\") pod 
\"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.229767 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-reloader\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.229827 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-metrics\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.229848 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-cert\") pod \"frr-k8s-webhook-server-5478bdb765-wvjl8\" (UID: \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.229633 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-conf\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.229943 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metrics-certs\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230020 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94sd\" (UniqueName: \"kubernetes.io/projected/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-kube-api-access-r94sd\") pod \"controller-5d688f5ffc-xpj4x\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230060 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-cert\") pod \"controller-5d688f5ffc-xpj4x\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230073 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-reloader\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230172 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-sockets\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230209 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8qhr\" (UniqueName: \"kubernetes.io/projected/df0f332d-5479-4bfe-846e-03805ead7d11-kube-api-access-p8qhr\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230239 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"memberlist\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230260 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metallb-excludel2\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230279 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-metrics-certs\") pod \"controller-5d688f5ffc-xpj4x\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230304 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/df0f332d-5479-4bfe-846e-03805ead7d11-frr-startup\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: E1001 15:54:37.230437 4949 secret.go:188] Couldn't get secret metallb-system/frr-k8s-webhook-server-cert: secret "frr-k8s-webhook-server-cert" not found Oct 01 15:54:37 crc kubenswrapper[4949]: E1001 15:54:37.230548 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-cert podName:a1d2cf29-ae90-40e0-81bf-a6661fe62cb2 nodeName:}" failed. No retries permitted until 2025-10-01 15:54:37.730531722 +0000 UTC m=+777.036137913 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-cert") pod "frr-k8s-webhook-server-5478bdb765-wvjl8" (UID: "a1d2cf29-ae90-40e0-81bf-a6661fe62cb2") : secret "frr-k8s-webhook-server-cert" not found Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.230478 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-sockets\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.231047 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/df0f332d-5479-4bfe-846e-03805ead7d11-frr-startup\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.231088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2krrv\" (UniqueName: \"kubernetes.io/projected/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-kube-api-access-2krrv\") pod \"frr-k8s-webhook-server-5478bdb765-wvjl8\" (UID: \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.239028 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df0f332d-5479-4bfe-846e-03805ead7d11-metrics-certs\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.248365 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2krrv\" (UniqueName: 
\"kubernetes.io/projected/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-kube-api-access-2krrv\") pod \"frr-k8s-webhook-server-5478bdb765-wvjl8\" (UID: \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.249666 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8qhr\" (UniqueName: \"kubernetes.io/projected/df0f332d-5479-4bfe-846e-03805ead7d11-kube-api-access-p8qhr\") pod \"frr-k8s-d8v9f\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.332185 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metrics-certs\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.332250 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r94sd\" (UniqueName: \"kubernetes.io/projected/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-kube-api-access-r94sd\") pod \"controller-5d688f5ffc-xpj4x\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.332270 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-cert\") pod \"controller-5d688f5ffc-xpj4x\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.332309 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist\") 
pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.332326 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metallb-excludel2\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: E1001 15:54:37.332475 4949 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.332633 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-metrics-certs\") pod \"controller-5d688f5ffc-xpj4x\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: E1001 15:54:37.332896 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist podName:6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae nodeName:}" failed. No retries permitted until 2025-10-01 15:54:37.832870795 +0000 UTC m=+777.138476986 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist") pod "speaker-g9xqm" (UID: "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae") : secret "metallb-memberlist" not found Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.333136 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjd4d\" (UniqueName: \"kubernetes.io/projected/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-kube-api-access-wjd4d\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.333322 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metallb-excludel2\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.335271 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metrics-certs\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.335441 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-cert\") pod \"controller-5d688f5ffc-xpj4x\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.335510 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.335956 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-metrics-certs\") pod \"controller-5d688f5ffc-xpj4x\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.348814 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjd4d\" (UniqueName: \"kubernetes.io/projected/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-kube-api-access-wjd4d\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.361459 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94sd\" (UniqueName: \"kubernetes.io/projected/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-kube-api-access-r94sd\") pod \"controller-5d688f5ffc-xpj4x\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.418349 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.654249 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerStarted","Data":"fddfc46a1154b3147e0f6c614dd30f482fec15c32674f5e25ae6cacbbb51631c"} Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.737843 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-cert\") pod \"frr-k8s-webhook-server-5478bdb765-wvjl8\" (UID: \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.742336 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-cert\") pod \"frr-k8s-webhook-server-5478bdb765-wvjl8\" (UID: \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\") " pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.838946 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:37 crc kubenswrapper[4949]: E1001 15:54:37.839181 4949 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found Oct 01 15:54:37 crc kubenswrapper[4949]: E1001 15:54:37.839251 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist podName:6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae nodeName:}" failed. 
No retries permitted until 2025-10-01 15:54:38.839233114 +0000 UTC m=+778.144839305 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist") pod "speaker-g9xqm" (UID: "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae") : secret "metallb-memberlist" not found Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.888381 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-5d688f5ffc-xpj4x"] Oct 01 15:54:37 crc kubenswrapper[4949]: W1001 15:54:37.909430 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4d7b6770_ccc5_4f7c_88f7_a32bc9541190.slice/crio-9cb0e84d3d620c49a069b99b71a60d7530f59d8cc82ce34d38128f76e4b5df14 WatchSource:0}: Error finding container 9cb0e84d3d620c49a069b99b71a60d7530f59d8cc82ce34d38128f76e4b5df14: Status 404 returned error can't find the container with id 9cb0e84d3d620c49a069b99b71a60d7530f59d8cc82ce34d38128f76e4b5df14 Oct 01 15:54:37 crc kubenswrapper[4949]: I1001 15:54:37.944173 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.420259 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8"] Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.663139 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-xpj4x" event={"ID":"4d7b6770-ccc5-4f7c-88f7-a32bc9541190","Type":"ContainerStarted","Data":"5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518"} Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.663665 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-xpj4x" event={"ID":"4d7b6770-ccc5-4f7c-88f7-a32bc9541190","Type":"ContainerStarted","Data":"509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa"} Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.663683 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-5d688f5ffc-xpj4x" event={"ID":"4d7b6770-ccc5-4f7c-88f7-a32bc9541190","Type":"ContainerStarted","Data":"9cb0e84d3d620c49a069b99b71a60d7530f59d8cc82ce34d38128f76e4b5df14"} Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.663734 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.663956 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" event={"ID":"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2","Type":"ContainerStarted","Data":"30a5b470225545c1e904cfffdec82205f30081e4a5185b2f86399a5b2343b27e"} Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.680981 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-5d688f5ffc-xpj4x" podStartSLOduration=1.680960526 podStartE2EDuration="1.680960526s" 
podCreationTimestamp="2025-10-01 15:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:54:38.679162116 +0000 UTC m=+777.984768317" watchObservedRunningTime="2025-10-01 15:54:38.680960526 +0000 UTC m=+777.986566717" Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.851370 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.856290 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist\") pod \"speaker-g9xqm\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " pod="metallb-system/speaker-g9xqm" Oct 01 15:54:38 crc kubenswrapper[4949]: I1001 15:54:38.912731 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/speaker-g9xqm" Oct 01 15:54:39 crc kubenswrapper[4949]: I1001 15:54:39.674951 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9xqm" event={"ID":"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae","Type":"ContainerStarted","Data":"b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394"} Oct 01 15:54:39 crc kubenswrapper[4949]: I1001 15:54:39.675330 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9xqm" event={"ID":"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae","Type":"ContainerStarted","Data":"a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b"} Oct 01 15:54:39 crc kubenswrapper[4949]: I1001 15:54:39.675346 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-g9xqm" event={"ID":"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae","Type":"ContainerStarted","Data":"2a5972514735de4d716e5a54ca96c9ed44180ec1f95befeda429af27c76d952d"} Oct 01 15:54:39 crc kubenswrapper[4949]: I1001 15:54:39.675720 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-g9xqm" Oct 01 15:54:39 crc kubenswrapper[4949]: I1001 15:54:39.694917 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-g9xqm" podStartSLOduration=2.694892519 podStartE2EDuration="2.694892519s" podCreationTimestamp="2025-10-01 15:54:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:54:39.691228717 +0000 UTC m=+778.996834928" watchObservedRunningTime="2025-10-01 15:54:39.694892519 +0000 UTC m=+779.000498700" Oct 01 15:54:45 crc kubenswrapper[4949]: I1001 15:54:45.719549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" 
event={"ID":"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2","Type":"ContainerStarted","Data":"bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852"} Oct 01 15:54:45 crc kubenswrapper[4949]: I1001 15:54:45.720106 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:45 crc kubenswrapper[4949]: I1001 15:54:45.721629 4949 generic.go:334] "Generic (PLEG): container finished" podID="df0f332d-5479-4bfe-846e-03805ead7d11" containerID="4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c" exitCode=0 Oct 01 15:54:45 crc kubenswrapper[4949]: I1001 15:54:45.721659 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerDied","Data":"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c"} Oct 01 15:54:45 crc kubenswrapper[4949]: I1001 15:54:45.743904 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" podStartSLOduration=3.204493586 podStartE2EDuration="9.743882041s" podCreationTimestamp="2025-10-01 15:54:36 +0000 UTC" firstStartedPulling="2025-10-01 15:54:38.432323237 +0000 UTC m=+777.737929428" lastFinishedPulling="2025-10-01 15:54:44.971711692 +0000 UTC m=+784.277317883" observedRunningTime="2025-10-01 15:54:45.740684163 +0000 UTC m=+785.046290384" watchObservedRunningTime="2025-10-01 15:54:45.743882041 +0000 UTC m=+785.049488242" Oct 01 15:54:46 crc kubenswrapper[4949]: I1001 15:54:46.729081 4949 generic.go:334] "Generic (PLEG): container finished" podID="df0f332d-5479-4bfe-846e-03805ead7d11" containerID="c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894" exitCode=0 Oct 01 15:54:46 crc kubenswrapper[4949]: I1001 15:54:46.729157 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" 
event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerDied","Data":"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894"} Oct 01 15:54:47 crc kubenswrapper[4949]: I1001 15:54:47.737656 4949 generic.go:334] "Generic (PLEG): container finished" podID="df0f332d-5479-4bfe-846e-03805ead7d11" containerID="893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882" exitCode=0 Oct 01 15:54:47 crc kubenswrapper[4949]: I1001 15:54:47.737751 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerDied","Data":"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882"} Oct 01 15:54:48 crc kubenswrapper[4949]: I1001 15:54:48.752844 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerStarted","Data":"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28"} Oct 01 15:54:48 crc kubenswrapper[4949]: I1001 15:54:48.753279 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerStarted","Data":"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57"} Oct 01 15:54:48 crc kubenswrapper[4949]: I1001 15:54:48.753302 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerStarted","Data":"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7"} Oct 01 15:54:48 crc kubenswrapper[4949]: I1001 15:54:48.753319 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerStarted","Data":"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b"} Oct 01 15:54:48 crc kubenswrapper[4949]: I1001 15:54:48.753336 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerStarted","Data":"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d"} Oct 01 15:54:49 crc kubenswrapper[4949]: I1001 15:54:49.777373 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-d8v9f" event={"ID":"df0f332d-5479-4bfe-846e-03805ead7d11","Type":"ContainerStarted","Data":"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc"} Oct 01 15:54:49 crc kubenswrapper[4949]: I1001 15:54:49.778398 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:49 crc kubenswrapper[4949]: I1001 15:54:49.811368 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-d8v9f" podStartSLOduration=6.353472291 podStartE2EDuration="13.811348814s" podCreationTimestamp="2025-10-01 15:54:36 +0000 UTC" firstStartedPulling="2025-10-01 15:54:37.499387479 +0000 UTC m=+776.804993670" lastFinishedPulling="2025-10-01 15:54:44.957264002 +0000 UTC m=+784.262870193" observedRunningTime="2025-10-01 15:54:49.807976441 +0000 UTC m=+789.113582642" watchObservedRunningTime="2025-10-01 15:54:49.811348814 +0000 UTC m=+789.116955005" Oct 01 15:54:52 crc kubenswrapper[4949]: I1001 15:54:52.336106 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:52 crc kubenswrapper[4949]: I1001 15:54:52.376897 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:57 crc kubenswrapper[4949]: I1001 15:54:57.338244 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-d8v9f" Oct 01 15:54:57 crc kubenswrapper[4949]: I1001 15:54:57.424233 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 15:54:57 crc kubenswrapper[4949]: I1001 15:54:57.949607 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 15:54:58 crc kubenswrapper[4949]: I1001 15:54:58.917056 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-g9xqm" Oct 01 15:55:01 crc kubenswrapper[4949]: I1001 15:55:01.735540 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-ztrcz"] Oct 01 15:55:01 crc kubenswrapper[4949]: I1001 15:55:01.736723 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ztrcz" Oct 01 15:55:01 crc kubenswrapper[4949]: I1001 15:55:01.743336 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-index-dockercfg-zqg5f" Oct 01 15:55:01 crc kubenswrapper[4949]: I1001 15:55:01.743379 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt" Oct 01 15:55:01 crc kubenswrapper[4949]: I1001 15:55:01.743672 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt" Oct 01 15:55:01 crc kubenswrapper[4949]: I1001 15:55:01.767188 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ztrcz"] Oct 01 15:55:01 crc kubenswrapper[4949]: I1001 15:55:01.865688 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t746s\" (UniqueName: \"kubernetes.io/projected/dd555060-4290-4ba5-8887-b7fe770fc47a-kube-api-access-t746s\") pod \"openstack-operator-index-ztrcz\" (UID: \"dd555060-4290-4ba5-8887-b7fe770fc47a\") " pod="openstack-operators/openstack-operator-index-ztrcz" Oct 01 15:55:01 crc kubenswrapper[4949]: I1001 
15:55:01.966934 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t746s\" (UniqueName: \"kubernetes.io/projected/dd555060-4290-4ba5-8887-b7fe770fc47a-kube-api-access-t746s\") pod \"openstack-operator-index-ztrcz\" (UID: \"dd555060-4290-4ba5-8887-b7fe770fc47a\") " pod="openstack-operators/openstack-operator-index-ztrcz" Oct 01 15:55:01 crc kubenswrapper[4949]: I1001 15:55:01.985081 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t746s\" (UniqueName: \"kubernetes.io/projected/dd555060-4290-4ba5-8887-b7fe770fc47a-kube-api-access-t746s\") pod \"openstack-operator-index-ztrcz\" (UID: \"dd555060-4290-4ba5-8887-b7fe770fc47a\") " pod="openstack-operators/openstack-operator-index-ztrcz" Oct 01 15:55:02 crc kubenswrapper[4949]: I1001 15:55:02.061051 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ztrcz" Oct 01 15:55:02 crc kubenswrapper[4949]: I1001 15:55:02.510209 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-ztrcz"] Oct 01 15:55:02 crc kubenswrapper[4949]: I1001 15:55:02.865815 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ztrcz" event={"ID":"dd555060-4290-4ba5-8887-b7fe770fc47a","Type":"ContainerStarted","Data":"e49f918fb577ec8702f273bcfabe223bde2793c9f13bded485a745091c5e7e6b"} Oct 01 15:55:04 crc kubenswrapper[4949]: I1001 15:55:04.880817 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ztrcz" event={"ID":"dd555060-4290-4ba5-8887-b7fe770fc47a","Type":"ContainerStarted","Data":"bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19"} Oct 01 15:55:04 crc kubenswrapper[4949]: I1001 15:55:04.900099 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-ztrcz" 
podStartSLOduration=2.020657608 podStartE2EDuration="3.900070173s" podCreationTimestamp="2025-10-01 15:55:01 +0000 UTC" firstStartedPulling="2025-10-01 15:55:02.523750267 +0000 UTC m=+801.829356458" lastFinishedPulling="2025-10-01 15:55:04.403162832 +0000 UTC m=+803.708769023" observedRunningTime="2025-10-01 15:55:04.89884946 +0000 UTC m=+804.204455671" watchObservedRunningTime="2025-10-01 15:55:04.900070173 +0000 UTC m=+804.205676394" Oct 01 15:55:05 crc kubenswrapper[4949]: I1001 15:55:05.104855 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ztrcz"] Oct 01 15:55:05 crc kubenswrapper[4949]: I1001 15:55:05.710989 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-index-2hj4v"] Oct 01 15:55:05 crc kubenswrapper[4949]: I1001 15:55:05.711726 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2hj4v" Oct 01 15:55:05 crc kubenswrapper[4949]: I1001 15:55:05.728419 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2hj4v"] Oct 01 15:55:05 crc kubenswrapper[4949]: I1001 15:55:05.735654 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksq58\" (UniqueName: \"kubernetes.io/projected/00d1abc7-5d1b-40be-8310-4d62e84f0c06-kube-api-access-ksq58\") pod \"openstack-operator-index-2hj4v\" (UID: \"00d1abc7-5d1b-40be-8310-4d62e84f0c06\") " pod="openstack-operators/openstack-operator-index-2hj4v" Oct 01 15:55:05 crc kubenswrapper[4949]: I1001 15:55:05.836621 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksq58\" (UniqueName: \"kubernetes.io/projected/00d1abc7-5d1b-40be-8310-4d62e84f0c06-kube-api-access-ksq58\") pod \"openstack-operator-index-2hj4v\" (UID: \"00d1abc7-5d1b-40be-8310-4d62e84f0c06\") " 
pod="openstack-operators/openstack-operator-index-2hj4v" Oct 01 15:55:05 crc kubenswrapper[4949]: I1001 15:55:05.858384 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksq58\" (UniqueName: \"kubernetes.io/projected/00d1abc7-5d1b-40be-8310-4d62e84f0c06-kube-api-access-ksq58\") pod \"openstack-operator-index-2hj4v\" (UID: \"00d1abc7-5d1b-40be-8310-4d62e84f0c06\") " pod="openstack-operators/openstack-operator-index-2hj4v" Oct 01 15:55:06 crc kubenswrapper[4949]: I1001 15:55:06.035778 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-2hj4v" Oct 01 15:55:06 crc kubenswrapper[4949]: I1001 15:55:06.421612 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-index-2hj4v"] Oct 01 15:55:06 crc kubenswrapper[4949]: W1001 15:55:06.431114 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod00d1abc7_5d1b_40be_8310_4d62e84f0c06.slice/crio-c0c584f5b1484b8b2c8f69f2e96f11e206c99d007abcd50347933ce759903ef8 WatchSource:0}: Error finding container c0c584f5b1484b8b2c8f69f2e96f11e206c99d007abcd50347933ce759903ef8: Status 404 returned error can't find the container with id c0c584f5b1484b8b2c8f69f2e96f11e206c99d007abcd50347933ce759903ef8 Oct 01 15:55:06 crc kubenswrapper[4949]: I1001 15:55:06.915258 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2hj4v" event={"ID":"00d1abc7-5d1b-40be-8310-4d62e84f0c06","Type":"ContainerStarted","Data":"1b184b8e7d1857652562eb9d292f248a95e1f7a30ed866ec6fb3ad8e723ce257"} Oct 01 15:55:06 crc kubenswrapper[4949]: I1001 15:55:06.915305 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/openstack-operator-index-ztrcz" podUID="dd555060-4290-4ba5-8887-b7fe770fc47a" containerName="registry-server" 
containerID="cri-o://bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19" gracePeriod=2 Oct 01 15:55:06 crc kubenswrapper[4949]: I1001 15:55:06.918817 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-2hj4v" event={"ID":"00d1abc7-5d1b-40be-8310-4d62e84f0c06","Type":"ContainerStarted","Data":"c0c584f5b1484b8b2c8f69f2e96f11e206c99d007abcd50347933ce759903ef8"} Oct 01 15:55:06 crc kubenswrapper[4949]: I1001 15:55:06.933104 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-index-2hj4v" podStartSLOduration=1.881649849 podStartE2EDuration="1.933085365s" podCreationTimestamp="2025-10-01 15:55:05 +0000 UTC" firstStartedPulling="2025-10-01 15:55:06.435807673 +0000 UTC m=+805.741413864" lastFinishedPulling="2025-10-01 15:55:06.487243189 +0000 UTC m=+805.792849380" observedRunningTime="2025-10-01 15:55:06.932068316 +0000 UTC m=+806.237674547" watchObservedRunningTime="2025-10-01 15:55:06.933085365 +0000 UTC m=+806.238691556" Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.311862 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ztrcz" Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.457224 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t746s\" (UniqueName: \"kubernetes.io/projected/dd555060-4290-4ba5-8887-b7fe770fc47a-kube-api-access-t746s\") pod \"dd555060-4290-4ba5-8887-b7fe770fc47a\" (UID: \"dd555060-4290-4ba5-8887-b7fe770fc47a\") " Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.462555 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd555060-4290-4ba5-8887-b7fe770fc47a-kube-api-access-t746s" (OuterVolumeSpecName: "kube-api-access-t746s") pod "dd555060-4290-4ba5-8887-b7fe770fc47a" (UID: "dd555060-4290-4ba5-8887-b7fe770fc47a"). 
InnerVolumeSpecName "kube-api-access-t746s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.558856 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t746s\" (UniqueName: \"kubernetes.io/projected/dd555060-4290-4ba5-8887-b7fe770fc47a-kube-api-access-t746s\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.922054 4949 generic.go:334] "Generic (PLEG): container finished" podID="dd555060-4290-4ba5-8887-b7fe770fc47a" containerID="bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19" exitCode=0 Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.922939 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-index-ztrcz" Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.923341 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ztrcz" event={"ID":"dd555060-4290-4ba5-8887-b7fe770fc47a","Type":"ContainerDied","Data":"bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19"} Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.923368 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-index-ztrcz" event={"ID":"dd555060-4290-4ba5-8887-b7fe770fc47a","Type":"ContainerDied","Data":"e49f918fb577ec8702f273bcfabe223bde2793c9f13bded485a745091c5e7e6b"} Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.923384 4949 scope.go:117] "RemoveContainer" containerID="bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19" Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.942865 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/openstack-operator-index-ztrcz"] Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.944566 4949 scope.go:117] "RemoveContainer" 
containerID="bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19" Oct 01 15:55:07 crc kubenswrapper[4949]: E1001 15:55:07.945078 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19\": container with ID starting with bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19 not found: ID does not exist" containerID="bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19" Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.946066 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19"} err="failed to get container status \"bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19\": rpc error: code = NotFound desc = could not find container \"bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19\": container with ID starting with bb4af06cff1c605ddf3f3bbe8401e3552979c64b5cbf1e5c79cb4bb529a66b19 not found: ID does not exist" Oct 01 15:55:07 crc kubenswrapper[4949]: I1001 15:55:07.946538 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/openstack-operator-index-ztrcz"] Oct 01 15:55:09 crc kubenswrapper[4949]: I1001 15:55:09.610431 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd555060-4290-4ba5-8887-b7fe770fc47a" path="/var/lib/kubelet/pods/dd555060-4290-4ba5-8887-b7fe770fc47a/volumes" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.321362 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-zsjpd"] Oct 01 15:55:15 crc kubenswrapper[4949]: E1001 15:55:15.322070 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd555060-4290-4ba5-8887-b7fe770fc47a" containerName="registry-server" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.322085 
4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd555060-4290-4ba5-8887-b7fe770fc47a" containerName="registry-server" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.322216 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd555060-4290-4ba5-8887-b7fe770fc47a" containerName="registry-server" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.322938 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.335832 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsjpd"] Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.456836 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m4z76\" (UniqueName: \"kubernetes.io/projected/cfd3d432-0e28-4058-b6b1-ada65114f081-kube-api-access-m4z76\") pod \"redhat-marketplace-zsjpd\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.456914 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-catalog-content\") pod \"redhat-marketplace-zsjpd\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.456953 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-utilities\") pod \"redhat-marketplace-zsjpd\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 
15:55:15.558780 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m4z76\" (UniqueName: \"kubernetes.io/projected/cfd3d432-0e28-4058-b6b1-ada65114f081-kube-api-access-m4z76\") pod \"redhat-marketplace-zsjpd\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.558892 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-catalog-content\") pod \"redhat-marketplace-zsjpd\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.558959 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-utilities\") pod \"redhat-marketplace-zsjpd\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.559534 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-catalog-content\") pod \"redhat-marketplace-zsjpd\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.559739 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-utilities\") pod \"redhat-marketplace-zsjpd\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.577196 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-m4z76\" (UniqueName: \"kubernetes.io/projected/cfd3d432-0e28-4058-b6b1-ada65114f081-kube-api-access-m4z76\") pod \"redhat-marketplace-zsjpd\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:15 crc kubenswrapper[4949]: I1001 15:55:15.642062 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:16 crc kubenswrapper[4949]: I1001 15:55:16.036838 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-index-2hj4v" Oct 01 15:55:16 crc kubenswrapper[4949]: I1001 15:55:16.037117 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/openstack-operator-index-2hj4v" Oct 01 15:55:16 crc kubenswrapper[4949]: I1001 15:55:16.064242 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/openstack-operator-index-2hj4v" Oct 01 15:55:16 crc kubenswrapper[4949]: I1001 15:55:16.099687 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsjpd"] Oct 01 15:55:16 crc kubenswrapper[4949]: I1001 15:55:16.976199 4949 generic.go:334] "Generic (PLEG): container finished" podID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerID="a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27" exitCode=0 Oct 01 15:55:16 crc kubenswrapper[4949]: I1001 15:55:16.976301 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsjpd" event={"ID":"cfd3d432-0e28-4058-b6b1-ada65114f081","Type":"ContainerDied","Data":"a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27"} Oct 01 15:55:16 crc kubenswrapper[4949]: I1001 15:55:16.977057 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsjpd" 
event={"ID":"cfd3d432-0e28-4058-b6b1-ada65114f081","Type":"ContainerStarted","Data":"7f5fe3db9171f3a12d6a7a8affaac52bda0dacb978e74a8f06da74a8b3aa4e22"} Oct 01 15:55:17 crc kubenswrapper[4949]: I1001 15:55:17.003359 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-index-2hj4v" Oct 01 15:55:18 crc kubenswrapper[4949]: I1001 15:55:18.997703 4949 generic.go:334] "Generic (PLEG): container finished" podID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerID="34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc" exitCode=0 Oct 01 15:55:18 crc kubenswrapper[4949]: I1001 15:55:18.997770 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsjpd" event={"ID":"cfd3d432-0e28-4058-b6b1-ada65114f081","Type":"ContainerDied","Data":"34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc"} Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.117419 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cltnt"] Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.121114 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.130248 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cltnt"] Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.308015 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-catalog-content\") pod \"certified-operators-cltnt\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.308144 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbc7h\" (UniqueName: \"kubernetes.io/projected/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-kube-api-access-fbc7h\") pod \"certified-operators-cltnt\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.308310 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-utilities\") pod \"certified-operators-cltnt\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.409759 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbc7h\" (UniqueName: \"kubernetes.io/projected/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-kube-api-access-fbc7h\") pod \"certified-operators-cltnt\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.409836 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-utilities\") pod \"certified-operators-cltnt\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.409867 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-catalog-content\") pod \"certified-operators-cltnt\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.410400 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-catalog-content\") pod \"certified-operators-cltnt\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.410935 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-utilities\") pod \"certified-operators-cltnt\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.429665 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbc7h\" (UniqueName: \"kubernetes.io/projected/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-kube-api-access-fbc7h\") pod \"certified-operators-cltnt\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.449935 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:19 crc kubenswrapper[4949]: I1001 15:55:19.939146 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cltnt"] Oct 01 15:55:19 crc kubenswrapper[4949]: W1001 15:55:19.946640 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod12f7bdb2_3fa0_4328_83e7_0638451fbd7f.slice/crio-9eee7e96e5a65b9d63ee310f8fcf63ae12c0489f9775b8386043b39281731164 WatchSource:0}: Error finding container 9eee7e96e5a65b9d63ee310f8fcf63ae12c0489f9775b8386043b39281731164: Status 404 returned error can't find the container with id 9eee7e96e5a65b9d63ee310f8fcf63ae12c0489f9775b8386043b39281731164 Oct 01 15:55:20 crc kubenswrapper[4949]: I1001 15:55:20.006415 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cltnt" event={"ID":"12f7bdb2-3fa0-4328-83e7-0638451fbd7f","Type":"ContainerStarted","Data":"9eee7e96e5a65b9d63ee310f8fcf63ae12c0489f9775b8386043b39281731164"} Oct 01 15:55:20 crc kubenswrapper[4949]: I1001 15:55:20.008834 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsjpd" event={"ID":"cfd3d432-0e28-4058-b6b1-ada65114f081","Type":"ContainerStarted","Data":"35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c"} Oct 01 15:55:20 crc kubenswrapper[4949]: I1001 15:55:20.025597 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-zsjpd" podStartSLOduration=2.625036983 podStartE2EDuration="5.025552618s" podCreationTimestamp="2025-10-01 15:55:15 +0000 UTC" firstStartedPulling="2025-10-01 15:55:16.979441292 +0000 UTC m=+816.285047483" lastFinishedPulling="2025-10-01 15:55:19.379956927 +0000 UTC m=+818.685563118" observedRunningTime="2025-10-01 15:55:20.024904661 +0000 UTC m=+819.330510852" 
watchObservedRunningTime="2025-10-01 15:55:20.025552618 +0000 UTC m=+819.331158809" Oct 01 15:55:21 crc kubenswrapper[4949]: I1001 15:55:21.017113 4949 generic.go:334] "Generic (PLEG): container finished" podID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerID="524e22d9cbe9a7cc4ad23b6a8a6d56f2af27269655ead93571cd7bf2dbf22a9d" exitCode=0 Oct 01 15:55:21 crc kubenswrapper[4949]: I1001 15:55:21.017236 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cltnt" event={"ID":"12f7bdb2-3fa0-4328-83e7-0638451fbd7f","Type":"ContainerDied","Data":"524e22d9cbe9a7cc4ad23b6a8a6d56f2af27269655ead93571cd7bf2dbf22a9d"} Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.025668 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cltnt" event={"ID":"12f7bdb2-3fa0-4328-83e7-0638451fbd7f","Type":"ContainerDied","Data":"fa5568612ac95cc0e5b97335425fd21fb4eb7fd2b1aae3c18644a7896f7bd353"} Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.025515 4949 generic.go:334] "Generic (PLEG): container finished" podID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerID="fa5568612ac95cc0e5b97335425fd21fb4eb7fd2b1aae3c18644a7896f7bd353" exitCode=0 Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.716233 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-2vmvl"] Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.717879 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.727823 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vmvl"] Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.856410 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-utilities\") pod \"redhat-operators-2vmvl\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.856466 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-catalog-content\") pod \"redhat-operators-2vmvl\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.856486 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r662p\" (UniqueName: \"kubernetes.io/projected/345b5579-9684-4d5d-90f2-d192681bc18f-kube-api-access-r662p\") pod \"redhat-operators-2vmvl\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.957366 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-utilities\") pod \"redhat-operators-2vmvl\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.957436 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-catalog-content\") pod \"redhat-operators-2vmvl\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.957464 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r662p\" (UniqueName: \"kubernetes.io/projected/345b5579-9684-4d5d-90f2-d192681bc18f-kube-api-access-r662p\") pod \"redhat-operators-2vmvl\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.957960 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-catalog-content\") pod \"redhat-operators-2vmvl\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.957994 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-utilities\") pod \"redhat-operators-2vmvl\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:22 crc kubenswrapper[4949]: I1001 15:55:22.981460 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r662p\" (UniqueName: \"kubernetes.io/projected/345b5579-9684-4d5d-90f2-d192681bc18f-kube-api-access-r662p\") pod \"redhat-operators-2vmvl\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.034168 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.034191 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cltnt" event={"ID":"12f7bdb2-3fa0-4328-83e7-0638451fbd7f","Type":"ContainerStarted","Data":"b5270224f8594c43df03d1f541377692b4dd57279b169d1850b498df5d25e0d7"} Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.065481 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cltnt" podStartSLOduration=2.581761106 podStartE2EDuration="4.065465855s" podCreationTimestamp="2025-10-01 15:55:19 +0000 UTC" firstStartedPulling="2025-10-01 15:55:21.019477683 +0000 UTC m=+820.325083894" lastFinishedPulling="2025-10-01 15:55:22.503182452 +0000 UTC m=+821.808788643" observedRunningTime="2025-10-01 15:55:23.060539048 +0000 UTC m=+822.366145239" watchObservedRunningTime="2025-10-01 15:55:23.065465855 +0000 UTC m=+822.371072046" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.357324 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv"] Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.359557 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.363185 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-nd6n6" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.377455 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv"] Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.463677 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fhbpq\" (UniqueName: \"kubernetes.io/projected/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-kube-api-access-fhbpq\") pod \"7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.463817 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-util\") pod \"7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.463905 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-bundle\") pod \"7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 
15:55:23.473012 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-2vmvl"] Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.565543 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fhbpq\" (UniqueName: \"kubernetes.io/projected/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-kube-api-access-fhbpq\") pod \"7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.566236 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-util\") pod \"7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.566865 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-util\") pod \"7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.566993 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-bundle\") pod \"7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 
15:55:23.567530 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-bundle\") pod \"7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.589599 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fhbpq\" (UniqueName: \"kubernetes.io/projected/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-kube-api-access-fhbpq\") pod \"7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:23 crc kubenswrapper[4949]: I1001 15:55:23.678428 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:24 crc kubenswrapper[4949]: I1001 15:55:24.041324 4949 generic.go:334] "Generic (PLEG): container finished" podID="345b5579-9684-4d5d-90f2-d192681bc18f" containerID="4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999" exitCode=0 Oct 01 15:55:24 crc kubenswrapper[4949]: I1001 15:55:24.042162 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vmvl" event={"ID":"345b5579-9684-4d5d-90f2-d192681bc18f","Type":"ContainerDied","Data":"4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999"} Oct 01 15:55:24 crc kubenswrapper[4949]: I1001 15:55:24.042185 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vmvl" event={"ID":"345b5579-9684-4d5d-90f2-d192681bc18f","Type":"ContainerStarted","Data":"23d8ed15cd6039156aba888c7a955bae1c41e61736c64e748c95e3f0d6bfe98d"} 
Oct 01 15:55:24 crc kubenswrapper[4949]: I1001 15:55:24.121246 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv"] Oct 01 15:55:25 crc kubenswrapper[4949]: I1001 15:55:25.057514 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerID="a69aefe9d1e14ed651f8a5c7400d7a960ab9555556f88d101172c17b11f167e9" exitCode=0 Oct 01 15:55:25 crc kubenswrapper[4949]: I1001 15:55:25.057571 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" event={"ID":"9f616ad0-6bc3-46a6-a7c9-4c77256f8660","Type":"ContainerDied","Data":"a69aefe9d1e14ed651f8a5c7400d7a960ab9555556f88d101172c17b11f167e9"} Oct 01 15:55:25 crc kubenswrapper[4949]: I1001 15:55:25.057950 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" event={"ID":"9f616ad0-6bc3-46a6-a7c9-4c77256f8660","Type":"ContainerStarted","Data":"729e9a3f7f6b50c11f1f929e116372403cb41c2c99dfe822d02772e3197e171f"} Oct 01 15:55:25 crc kubenswrapper[4949]: I1001 15:55:25.642788 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:25 crc kubenswrapper[4949]: I1001 15:55:25.642864 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:25 crc kubenswrapper[4949]: I1001 15:55:25.685542 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:26 crc kubenswrapper[4949]: I1001 15:55:26.066385 4949 generic.go:334] "Generic (PLEG): container finished" podID="345b5579-9684-4d5d-90f2-d192681bc18f" containerID="3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66" 
exitCode=0 Oct 01 15:55:26 crc kubenswrapper[4949]: I1001 15:55:26.066445 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vmvl" event={"ID":"345b5579-9684-4d5d-90f2-d192681bc18f","Type":"ContainerDied","Data":"3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66"} Oct 01 15:55:26 crc kubenswrapper[4949]: I1001 15:55:26.070994 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerID="886c0792fa2f3323442b77a6de5d0a1e95abd66eff6e0f30656b98df97251c37" exitCode=0 Oct 01 15:55:26 crc kubenswrapper[4949]: I1001 15:55:26.071785 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" event={"ID":"9f616ad0-6bc3-46a6-a7c9-4c77256f8660","Type":"ContainerDied","Data":"886c0792fa2f3323442b77a6de5d0a1e95abd66eff6e0f30656b98df97251c37"} Oct 01 15:55:26 crc kubenswrapper[4949]: I1001 15:55:26.136094 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:27 crc kubenswrapper[4949]: I1001 15:55:27.080507 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerID="0cbda65816c860c2024b6bb7dae2c07dfa5924e3902b76c2fd010eba7befb6cb" exitCode=0 Oct 01 15:55:27 crc kubenswrapper[4949]: I1001 15:55:27.080596 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" event={"ID":"9f616ad0-6bc3-46a6-a7c9-4c77256f8660","Type":"ContainerDied","Data":"0cbda65816c860c2024b6bb7dae2c07dfa5924e3902b76c2fd010eba7befb6cb"} Oct 01 15:55:27 crc kubenswrapper[4949]: I1001 15:55:27.083300 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vmvl" 
event={"ID":"345b5579-9684-4d5d-90f2-d192681bc18f","Type":"ContainerStarted","Data":"7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae"} Oct 01 15:55:27 crc kubenswrapper[4949]: I1001 15:55:27.705311 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-2vmvl" podStartSLOduration=3.063209698 podStartE2EDuration="5.70529398s" podCreationTimestamp="2025-10-01 15:55:22 +0000 UTC" firstStartedPulling="2025-10-01 15:55:24.042977354 +0000 UTC m=+823.348583545" lastFinishedPulling="2025-10-01 15:55:26.685061616 +0000 UTC m=+825.990667827" observedRunningTime="2025-10-01 15:55:27.11608161 +0000 UTC m=+826.421687801" watchObservedRunningTime="2025-10-01 15:55:27.70529398 +0000 UTC m=+827.010900171" Oct 01 15:55:27 crc kubenswrapper[4949]: I1001 15:55:27.707174 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsjpd"] Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.087707 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-zsjpd" podUID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerName="registry-server" containerID="cri-o://35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c" gracePeriod=2 Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.356568 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.443693 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fhbpq\" (UniqueName: \"kubernetes.io/projected/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-kube-api-access-fhbpq\") pod \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.443733 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-bundle\") pod \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.443826 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-util\") pod \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\" (UID: \"9f616ad0-6bc3-46a6-a7c9-4c77256f8660\") " Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.444485 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-bundle" (OuterVolumeSpecName: "bundle") pod "9f616ad0-6bc3-46a6-a7c9-4c77256f8660" (UID: "9f616ad0-6bc3-46a6-a7c9-4c77256f8660"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.448894 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-kube-api-access-fhbpq" (OuterVolumeSpecName: "kube-api-access-fhbpq") pod "9f616ad0-6bc3-46a6-a7c9-4c77256f8660" (UID: "9f616ad0-6bc3-46a6-a7c9-4c77256f8660"). InnerVolumeSpecName "kube-api-access-fhbpq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.457467 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fhbpq\" (UniqueName: \"kubernetes.io/projected/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-kube-api-access-fhbpq\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.457498 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.457570 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-util" (OuterVolumeSpecName: "util") pod "9f616ad0-6bc3-46a6-a7c9-4c77256f8660" (UID: "9f616ad0-6bc3-46a6-a7c9-4c77256f8660"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:55:28 crc kubenswrapper[4949]: I1001 15:55:28.558534 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/9f616ad0-6bc3-46a6-a7c9-4c77256f8660-util\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.008415 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.063547 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-utilities\") pod \"cfd3d432-0e28-4058-b6b1-ada65114f081\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.063695 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m4z76\" (UniqueName: \"kubernetes.io/projected/cfd3d432-0e28-4058-b6b1-ada65114f081-kube-api-access-m4z76\") pod \"cfd3d432-0e28-4058-b6b1-ada65114f081\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.063734 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-catalog-content\") pod \"cfd3d432-0e28-4058-b6b1-ada65114f081\" (UID: \"cfd3d432-0e28-4058-b6b1-ada65114f081\") " Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.064365 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-utilities" (OuterVolumeSpecName: "utilities") pod "cfd3d432-0e28-4058-b6b1-ada65114f081" (UID: "cfd3d432-0e28-4058-b6b1-ada65114f081"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.066607 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cfd3d432-0e28-4058-b6b1-ada65114f081-kube-api-access-m4z76" (OuterVolumeSpecName: "kube-api-access-m4z76") pod "cfd3d432-0e28-4058-b6b1-ada65114f081" (UID: "cfd3d432-0e28-4058-b6b1-ada65114f081"). InnerVolumeSpecName "kube-api-access-m4z76". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.076313 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cfd3d432-0e28-4058-b6b1-ada65114f081" (UID: "cfd3d432-0e28-4058-b6b1-ada65114f081"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.095825 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.095829 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv" event={"ID":"9f616ad0-6bc3-46a6-a7c9-4c77256f8660","Type":"ContainerDied","Data":"729e9a3f7f6b50c11f1f929e116372403cb41c2c99dfe822d02772e3197e171f"} Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.095883 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="729e9a3f7f6b50c11f1f929e116372403cb41c2c99dfe822d02772e3197e171f" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.098720 4949 generic.go:334] "Generic (PLEG): container finished" podID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerID="35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c" exitCode=0 Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.098765 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsjpd" event={"ID":"cfd3d432-0e28-4058-b6b1-ada65114f081","Type":"ContainerDied","Data":"35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c"} Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.098784 4949 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-zsjpd" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.098803 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-zsjpd" event={"ID":"cfd3d432-0e28-4058-b6b1-ada65114f081","Type":"ContainerDied","Data":"7f5fe3db9171f3a12d6a7a8affaac52bda0dacb978e74a8f06da74a8b3aa4e22"} Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.098842 4949 scope.go:117] "RemoveContainer" containerID="35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.116241 4949 scope.go:117] "RemoveContainer" containerID="34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.140690 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsjpd"] Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.141658 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-zsjpd"] Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.143557 4949 scope.go:117] "RemoveContainer" containerID="a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.157890 4949 scope.go:117] "RemoveContainer" containerID="35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c" Oct 01 15:55:29 crc kubenswrapper[4949]: E1001 15:55:29.160587 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c\": container with ID starting with 35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c not found: ID does not exist" containerID="35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.160630 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c"} err="failed to get container status \"35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c\": rpc error: code = NotFound desc = could not find container \"35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c\": container with ID starting with 35ec2fab6a856e398e85d64b9d8f3252e144e6635265f6be3cce5f47bb81fd2c not found: ID does not exist" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.160659 4949 scope.go:117] "RemoveContainer" containerID="34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc" Oct 01 15:55:29 crc kubenswrapper[4949]: E1001 15:55:29.160905 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc\": container with ID starting with 34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc not found: ID does not exist" containerID="34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.160931 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc"} err="failed to get container status \"34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc\": rpc error: code = NotFound desc = could not find container \"34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc\": container with ID starting with 34e6dffd0f96924ed1026ed5ba16b5455b2b5518bab9949ba9566edbb6906ffc not found: ID does not exist" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.160948 4949 scope.go:117] "RemoveContainer" containerID="a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27" Oct 01 15:55:29 crc kubenswrapper[4949]: E1001 
15:55:29.161360 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27\": container with ID starting with a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27 not found: ID does not exist" containerID="a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.161388 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27"} err="failed to get container status \"a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27\": rpc error: code = NotFound desc = could not find container \"a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27\": container with ID starting with a7528f557ae3f1a6322df6d7f84c50cc19af9fa83553dcaad9a1ca5217bfdf27 not found: ID does not exist" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.165222 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m4z76\" (UniqueName: \"kubernetes.io/projected/cfd3d432-0e28-4058-b6b1-ada65114f081-kube-api-access-m4z76\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.165249 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.165258 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cfd3d432-0e28-4058-b6b1-ada65114f081-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.450751 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.450827 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.491515 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:29 crc kubenswrapper[4949]: I1001 15:55:29.609258 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cfd3d432-0e28-4058-b6b1-ada65114f081" path="/var/lib/kubelet/pods/cfd3d432-0e28-4058-b6b1-ada65114f081/volumes" Oct 01 15:55:30 crc kubenswrapper[4949]: I1001 15:55:30.141231 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:32 crc kubenswrapper[4949]: I1001 15:55:32.304070 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cltnt"] Oct 01 15:55:32 crc kubenswrapper[4949]: I1001 15:55:32.304548 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cltnt" podUID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerName="registry-server" containerID="cri-o://b5270224f8594c43df03d1f541377692b4dd57279b169d1850b498df5d25e0d7" gracePeriod=2 Oct 01 15:55:33 crc kubenswrapper[4949]: I1001 15:55:33.034986 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:33 crc kubenswrapper[4949]: I1001 15:55:33.035279 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:33 crc kubenswrapper[4949]: I1001 15:55:33.082327 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:33 
crc kubenswrapper[4949]: I1001 15:55:33.156378 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.089390 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt"] Oct 01 15:55:34 crc kubenswrapper[4949]: E1001 15:55:34.089718 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerName="extract-utilities" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.089733 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerName="extract-utilities" Oct 01 15:55:34 crc kubenswrapper[4949]: E1001 15:55:34.089742 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerName="pull" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.089750 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerName="pull" Oct 01 15:55:34 crc kubenswrapper[4949]: E1001 15:55:34.089762 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerName="extract-content" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.089772 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerName="extract-content" Oct 01 15:55:34 crc kubenswrapper[4949]: E1001 15:55:34.089787 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerName="registry-server" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.089797 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerName="registry-server" Oct 01 15:55:34 crc kubenswrapper[4949]: E1001 
15:55:34.089813 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerName="extract" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.089820 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerName="extract" Oct 01 15:55:34 crc kubenswrapper[4949]: E1001 15:55:34.089837 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerName="util" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.089844 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerName="util" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.089973 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cfd3d432-0e28-4058-b6b1-ada65114f081" containerName="registry-server" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.089987 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f616ad0-6bc3-46a6-a7c9-4c77256f8660" containerName="extract" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.090740 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.092354 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-operator-dockercfg-fvg8k" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.109167 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt"] Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.227728 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-49knf\" (UniqueName: \"kubernetes.io/projected/ed7412ec-ed40-43b5-b045-b14f81da4090-kube-api-access-49knf\") pod \"openstack-operator-controller-operator-59598b58b7-k47pt\" (UID: \"ed7412ec-ed40-43b5-b045-b14f81da4090\") " pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.328816 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-49knf\" (UniqueName: \"kubernetes.io/projected/ed7412ec-ed40-43b5-b045-b14f81da4090-kube-api-access-49knf\") pod \"openstack-operator-controller-operator-59598b58b7-k47pt\" (UID: \"ed7412ec-ed40-43b5-b045-b14f81da4090\") " pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.354101 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-49knf\" (UniqueName: \"kubernetes.io/projected/ed7412ec-ed40-43b5-b045-b14f81da4090-kube-api-access-49knf\") pod \"openstack-operator-controller-operator-59598b58b7-k47pt\" (UID: \"ed7412ec-ed40-43b5-b045-b14f81da4090\") " pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.408150 4949 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" Oct 01 15:55:34 crc kubenswrapper[4949]: I1001 15:55:34.819280 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt"] Oct 01 15:55:34 crc kubenswrapper[4949]: W1001 15:55:34.826802 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poded7412ec_ed40_43b5_b045_b14f81da4090.slice/crio-3d621c6cc6364313a071f356f534eb956852013ad1551326d1fa1b9a90f08464 WatchSource:0}: Error finding container 3d621c6cc6364313a071f356f534eb956852013ad1551326d1fa1b9a90f08464: Status 404 returned error can't find the container with id 3d621c6cc6364313a071f356f534eb956852013ad1551326d1fa1b9a90f08464 Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.134445 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" event={"ID":"ed7412ec-ed40-43b5-b045-b14f81da4090","Type":"ContainerStarted","Data":"3d621c6cc6364313a071f356f534eb956852013ad1551326d1fa1b9a90f08464"} Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.137773 4949 generic.go:334] "Generic (PLEG): container finished" podID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerID="b5270224f8594c43df03d1f541377692b4dd57279b169d1850b498df5d25e0d7" exitCode=0 Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.138567 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cltnt" event={"ID":"12f7bdb2-3fa0-4328-83e7-0638451fbd7f","Type":"ContainerDied","Data":"b5270224f8594c43df03d1f541377692b4dd57279b169d1850b498df5d25e0d7"} Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.138600 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cltnt" 
event={"ID":"12f7bdb2-3fa0-4328-83e7-0638451fbd7f","Type":"ContainerDied","Data":"9eee7e96e5a65b9d63ee310f8fcf63ae12c0489f9775b8386043b39281731164"} Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.138613 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9eee7e96e5a65b9d63ee310f8fcf63ae12c0489f9775b8386043b39281731164" Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.168562 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.347160 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-utilities\") pod \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.347273 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-catalog-content\") pod \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.347319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbc7h\" (UniqueName: \"kubernetes.io/projected/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-kube-api-access-fbc7h\") pod \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\" (UID: \"12f7bdb2-3fa0-4328-83e7-0638451fbd7f\") " Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.347879 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-utilities" (OuterVolumeSpecName: "utilities") pod "12f7bdb2-3fa0-4328-83e7-0638451fbd7f" (UID: "12f7bdb2-3fa0-4328-83e7-0638451fbd7f"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.352918 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-kube-api-access-fbc7h" (OuterVolumeSpecName: "kube-api-access-fbc7h") pod "12f7bdb2-3fa0-4328-83e7-0638451fbd7f" (UID: "12f7bdb2-3fa0-4328-83e7-0638451fbd7f"). InnerVolumeSpecName "kube-api-access-fbc7h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.398029 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "12f7bdb2-3fa0-4328-83e7-0638451fbd7f" (UID: "12f7bdb2-3fa0-4328-83e7-0638451fbd7f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.448757 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.448799 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.448815 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbc7h\" (UniqueName: \"kubernetes.io/projected/12f7bdb2-3fa0-4328-83e7-0638451fbd7f-kube-api-access-fbc7h\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:35 crc kubenswrapper[4949]: I1001 15:55:35.508636 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vmvl"] Oct 01 15:55:36 crc 
kubenswrapper[4949]: I1001 15:55:36.143592 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-2vmvl" podUID="345b5579-9684-4d5d-90f2-d192681bc18f" containerName="registry-server" containerID="cri-o://7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae" gracePeriod=2 Oct 01 15:55:36 crc kubenswrapper[4949]: I1001 15:55:36.145036 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cltnt" Oct 01 15:55:36 crc kubenswrapper[4949]: I1001 15:55:36.166577 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cltnt"] Oct 01 15:55:36 crc kubenswrapper[4949]: I1001 15:55:36.186166 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cltnt"] Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.001339 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.151945 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-2vmvl" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.152355 4949 generic.go:334] "Generic (PLEG): container finished" podID="345b5579-9684-4d5d-90f2-d192681bc18f" containerID="7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae" exitCode=0 Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.152399 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vmvl" event={"ID":"345b5579-9684-4d5d-90f2-d192681bc18f","Type":"ContainerDied","Data":"7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae"} Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.152427 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-2vmvl" event={"ID":"345b5579-9684-4d5d-90f2-d192681bc18f","Type":"ContainerDied","Data":"23d8ed15cd6039156aba888c7a955bae1c41e61736c64e748c95e3f0d6bfe98d"} Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.152442 4949 scope.go:117] "RemoveContainer" containerID="7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.170822 4949 scope.go:117] "RemoveContainer" containerID="3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.172167 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-catalog-content\") pod \"345b5579-9684-4d5d-90f2-d192681bc18f\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.172275 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-utilities\") pod \"345b5579-9684-4d5d-90f2-d192681bc18f\" (UID: 
\"345b5579-9684-4d5d-90f2-d192681bc18f\") " Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.172355 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r662p\" (UniqueName: \"kubernetes.io/projected/345b5579-9684-4d5d-90f2-d192681bc18f-kube-api-access-r662p\") pod \"345b5579-9684-4d5d-90f2-d192681bc18f\" (UID: \"345b5579-9684-4d5d-90f2-d192681bc18f\") " Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.173304 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-utilities" (OuterVolumeSpecName: "utilities") pod "345b5579-9684-4d5d-90f2-d192681bc18f" (UID: "345b5579-9684-4d5d-90f2-d192681bc18f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.178916 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/345b5579-9684-4d5d-90f2-d192681bc18f-kube-api-access-r662p" (OuterVolumeSpecName: "kube-api-access-r662p") pod "345b5579-9684-4d5d-90f2-d192681bc18f" (UID: "345b5579-9684-4d5d-90f2-d192681bc18f"). InnerVolumeSpecName "kube-api-access-r662p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.229142 4949 scope.go:117] "RemoveContainer" containerID="4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.246674 4949 scope.go:117] "RemoveContainer" containerID="7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae" Oct 01 15:55:37 crc kubenswrapper[4949]: E1001 15:55:37.247154 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae\": container with ID starting with 7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae not found: ID does not exist" containerID="7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.247190 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae"} err="failed to get container status \"7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae\": rpc error: code = NotFound desc = could not find container \"7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae\": container with ID starting with 7acc3f13c5c6ec6117ea1dd67562e512c31d93def628672f973622fdf5a042ae not found: ID does not exist" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.247211 4949 scope.go:117] "RemoveContainer" containerID="3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66" Oct 01 15:55:37 crc kubenswrapper[4949]: E1001 15:55:37.247511 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66\": container with ID starting with 
3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66 not found: ID does not exist" containerID="3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.247539 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66"} err="failed to get container status \"3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66\": rpc error: code = NotFound desc = could not find container \"3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66\": container with ID starting with 3b79a7d070165fe19413c1e8eef0b904d1c9979a2ebee62bd7358907db46cb66 not found: ID does not exist" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.247553 4949 scope.go:117] "RemoveContainer" containerID="4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999" Oct 01 15:55:37 crc kubenswrapper[4949]: E1001 15:55:37.247797 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999\": container with ID starting with 4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999 not found: ID does not exist" containerID="4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.247819 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999"} err="failed to get container status \"4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999\": rpc error: code = NotFound desc = could not find container \"4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999\": container with ID starting with 4efbf5f9fce82c246ecf95217bc49bf7af509a14ed2bb9e581ee63a1f023d999 not found: ID does not 
exist" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.261679 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "345b5579-9684-4d5d-90f2-d192681bc18f" (UID: "345b5579-9684-4d5d-90f2-d192681bc18f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.273701 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r662p\" (UniqueName: \"kubernetes.io/projected/345b5579-9684-4d5d-90f2-d192681bc18f-kube-api-access-r662p\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.273732 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.273770 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/345b5579-9684-4d5d-90f2-d192681bc18f-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.480434 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-2vmvl"] Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.484599 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-2vmvl"] Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.610328 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" path="/var/lib/kubelet/pods/12f7bdb2-3fa0-4328-83e7-0638451fbd7f/volumes" Oct 01 15:55:37 crc kubenswrapper[4949]: I1001 15:55:37.611346 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="345b5579-9684-4d5d-90f2-d192681bc18f" path="/var/lib/kubelet/pods/345b5579-9684-4d5d-90f2-d192681bc18f/volumes" Oct 01 15:55:40 crc kubenswrapper[4949]: I1001 15:55:40.186408 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" event={"ID":"ed7412ec-ed40-43b5-b045-b14f81da4090","Type":"ContainerStarted","Data":"8ce3fd6ea38b854f505c3ddc406f7b7cf396f55975164dc474c0a84f279012af"} Oct 01 15:55:43 crc kubenswrapper[4949]: I1001 15:55:43.209115 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" event={"ID":"ed7412ec-ed40-43b5-b045-b14f81da4090","Type":"ContainerStarted","Data":"d7964e6edd32aefd353f34341f9e9a41dee7c9f5b74be3fbc0ea7e31159c2c53"} Oct 01 15:55:43 crc kubenswrapper[4949]: I1001 15:55:43.209665 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" Oct 01 15:55:43 crc kubenswrapper[4949]: I1001 15:55:43.238997 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" podStartSLOduration=1.409237431 podStartE2EDuration="9.23897892s" podCreationTimestamp="2025-10-01 15:55:34 +0000 UTC" firstStartedPulling="2025-10-01 15:55:34.82894606 +0000 UTC m=+834.134552251" lastFinishedPulling="2025-10-01 15:55:42.658687549 +0000 UTC m=+841.964293740" observedRunningTime="2025-10-01 15:55:43.236869031 +0000 UTC m=+842.542475222" watchObservedRunningTime="2025-10-01 15:55:43.23897892 +0000 UTC m=+842.544585111" Oct 01 15:55:54 crc kubenswrapper[4949]: I1001 15:55:54.411645 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-operator-59598b58b7-k47pt" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.995848 4949 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2"] Oct 01 15:56:10 crc kubenswrapper[4949]: E1001 15:56:10.997855 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerName="extract-content" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.997924 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerName="extract-content" Oct 01 15:56:10 crc kubenswrapper[4949]: E1001 15:56:10.998005 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345b5579-9684-4d5d-90f2-d192681bc18f" containerName="extract-content" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.998080 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="345b5579-9684-4d5d-90f2-d192681bc18f" containerName="extract-content" Oct 01 15:56:10 crc kubenswrapper[4949]: E1001 15:56:10.998250 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345b5579-9684-4d5d-90f2-d192681bc18f" containerName="extract-utilities" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.998328 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="345b5579-9684-4d5d-90f2-d192681bc18f" containerName="extract-utilities" Oct 01 15:56:10 crc kubenswrapper[4949]: E1001 15:56:10.998399 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="345b5579-9684-4d5d-90f2-d192681bc18f" containerName="registry-server" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.998453 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="345b5579-9684-4d5d-90f2-d192681bc18f" containerName="registry-server" Oct 01 15:56:10 crc kubenswrapper[4949]: E1001 15:56:10.998516 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerName="registry-server" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.998584 4949 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerName="registry-server" Oct 01 15:56:10 crc kubenswrapper[4949]: E1001 15:56:10.998655 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerName="extract-utilities" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.998720 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerName="extract-utilities" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.998928 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="12f7bdb2-3fa0-4328-83e7-0638451fbd7f" containerName="registry-server" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.999027 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="345b5579-9684-4d5d-90f2-d192681bc18f" containerName="registry-server" Oct 01 15:56:10 crc kubenswrapper[4949]: I1001 15:56:10.999738 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.009267 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"barbican-operator-controller-manager-dockercfg-z2cqm" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.014626 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.026315 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.027434 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.030732 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"cinder-operator-controller-manager-dockercfg-sgfzm" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.037715 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.040287 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.048848 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"designate-operator-controller-manager-dockercfg-kt7kh" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.067461 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.072381 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.094167 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.095161 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.098946 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.107263 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-cdxkw" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.117405 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cncc5\" (UniqueName: \"kubernetes.io/projected/fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4-kube-api-access-cncc5\") pod \"barbican-operator-controller-manager-6ff8b75857-2dwb2\" (UID: \"fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.117477 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkv24\" (UniqueName: \"kubernetes.io/projected/e5ed691c-da8b-4bae-8d20-e92c1e062ea2-kube-api-access-qkv24\") pod \"cinder-operator-controller-manager-795d876f9c-8wvgx\" (UID: \"e5ed691c-da8b-4bae-8d20-e92c1e062ea2\") " pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.117572 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jvjqv\" (UniqueName: \"kubernetes.io/projected/6bec54a7-c02d-45e5-bc1a-22ca4e0d2229-kube-api-access-jvjqv\") pod \"designate-operator-controller-manager-84f4f7b77b-rbgd2\" (UID: \"6bec54a7-c02d-45e5-bc1a-22ca4e0d2229\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 
15:56:11.184290 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.185678 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.198645 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-c9xfc" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.203808 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.205170 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.218697 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"heat-operator-controller-manager-dockercfg-hfknq" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.220019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cncc5\" (UniqueName: \"kubernetes.io/projected/fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4-kube-api-access-cncc5\") pod \"barbican-operator-controller-manager-6ff8b75857-2dwb2\" (UID: \"fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.220067 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkv24\" (UniqueName: \"kubernetes.io/projected/e5ed691c-da8b-4bae-8d20-e92c1e062ea2-kube-api-access-qkv24\") pod \"cinder-operator-controller-manager-795d876f9c-8wvgx\" (UID: 
\"e5ed691c-da8b-4bae-8d20-e92c1e062ea2\") " pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.220110 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24mwb\" (UniqueName: \"kubernetes.io/projected/e8658648-cf73-468c-8291-7b6ad0f265e6-kube-api-access-24mwb\") pod \"glance-operator-controller-manager-84958c4d49-jshxk\" (UID: \"e8658648-cf73-468c-8291-7b6ad0f265e6\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.220182 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jvjqv\" (UniqueName: \"kubernetes.io/projected/6bec54a7-c02d-45e5-bc1a-22ca4e0d2229-kube-api-access-jvjqv\") pod \"designate-operator-controller-manager-84f4f7b77b-rbgd2\" (UID: \"6bec54a7-c02d-45e5-bc1a-22ca4e0d2229\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.221761 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.223070 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.230537 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-hjdt6" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.230903 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-webhook-server-cert" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.246837 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qkv24\" (UniqueName: \"kubernetes.io/projected/e5ed691c-da8b-4bae-8d20-e92c1e062ea2-kube-api-access-qkv24\") pod \"cinder-operator-controller-manager-795d876f9c-8wvgx\" (UID: \"e5ed691c-da8b-4bae-8d20-e92c1e062ea2\") " pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.248001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cncc5\" (UniqueName: \"kubernetes.io/projected/fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4-kube-api-access-cncc5\") pod \"barbican-operator-controller-manager-6ff8b75857-2dwb2\" (UID: \"fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4\") " pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.250059 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jvjqv\" (UniqueName: \"kubernetes.io/projected/6bec54a7-c02d-45e5-bc1a-22ca4e0d2229-kube-api-access-jvjqv\") pod \"designate-operator-controller-manager-84f4f7b77b-rbgd2\" (UID: \"6bec54a7-c02d-45e5-bc1a-22ca4e0d2229\") " pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.264601 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.275174 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.279762 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.291261 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.292430 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.295531 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ironic-operator-controller-manager-dockercfg-fc674" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.300964 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.313204 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.314593 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.317513 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-tcfvn" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.321480 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/611a7063-c924-4b35-b2d7-d4d48a7e8f7a-cert\") pod \"infra-operator-controller-manager-9d6c5db85-cg7tn\" (UID: \"611a7063-c924-4b35-b2d7-d4d48a7e8f7a\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.321518 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hj749\" (UniqueName: \"kubernetes.io/projected/36350170-f4ac-4f4b-ba23-a05d9300f63a-kube-api-access-hj749\") pod \"heat-operator-controller-manager-5d889d78cf-4k7z5\" (UID: \"36350170-f4ac-4f4b-ba23-a05d9300f63a\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.321557 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmfst\" (UniqueName: \"kubernetes.io/projected/611a7063-c924-4b35-b2d7-d4d48a7e8f7a-kube-api-access-tmfst\") pod \"infra-operator-controller-manager-9d6c5db85-cg7tn\" (UID: \"611a7063-c924-4b35-b2d7-d4d48a7e8f7a\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.321616 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24mwb\" (UniqueName: \"kubernetes.io/projected/e8658648-cf73-468c-8291-7b6ad0f265e6-kube-api-access-24mwb\") pod 
\"glance-operator-controller-manager-84958c4d49-jshxk\" (UID: \"e8658648-cf73-468c-8291-7b6ad0f265e6\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.321636 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbwhp\" (UniqueName: \"kubernetes.io/projected/ad53e0de-7d24-447d-82c6-ab0a523c913a-kube-api-access-nbwhp\") pod \"horizon-operator-controller-manager-9f4696d94-5w8c4\" (UID: \"ad53e0de-7d24-447d-82c6-ab0a523c913a\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.321758 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.333074 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.334470 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.340747 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"manila-operator-controller-manager-dockercfg-g9555" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.350742 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.354010 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.361638 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.377989 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24mwb\" (UniqueName: \"kubernetes.io/projected/e8658648-cf73-468c-8291-7b6ad0f265e6-kube-api-access-24mwb\") pod \"glance-operator-controller-manager-84958c4d49-jshxk\" (UID: \"e8658648-cf73-468c-8291-7b6ad0f265e6\") " pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.380804 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-98jct"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.389428 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.390371 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.397635 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-rnz2c" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.412420 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-98jct"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.423440 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbwhp\" (UniqueName: \"kubernetes.io/projected/ad53e0de-7d24-447d-82c6-ab0a523c913a-kube-api-access-nbwhp\") pod \"horizon-operator-controller-manager-9f4696d94-5w8c4\" (UID: \"ad53e0de-7d24-447d-82c6-ab0a523c913a\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.423539 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77pj6\" (UniqueName: \"kubernetes.io/projected/58d1b185-0213-427d-8f85-2b2636e0d121-kube-api-access-77pj6\") pod \"ironic-operator-controller-manager-5cd4858477-5pcwg\" (UID: \"58d1b185-0213-427d-8f85-2b2636e0d121\") " pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.423579 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tx4k\" (UniqueName: \"kubernetes.io/projected/1f506f8f-047a-4efa-8ff2-dbe310d0e12e-kube-api-access-8tx4k\") pod \"keystone-operator-controller-manager-5bd55b4bff-mrm9t\" (UID: 
\"1f506f8f-047a-4efa-8ff2-dbe310d0e12e\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.423613 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/611a7063-c924-4b35-b2d7-d4d48a7e8f7a-cert\") pod \"infra-operator-controller-manager-9d6c5db85-cg7tn\" (UID: \"611a7063-c924-4b35-b2d7-d4d48a7e8f7a\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.423636 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hj749\" (UniqueName: \"kubernetes.io/projected/36350170-f4ac-4f4b-ba23-a05d9300f63a-kube-api-access-hj749\") pod \"heat-operator-controller-manager-5d889d78cf-4k7z5\" (UID: \"36350170-f4ac-4f4b-ba23-a05d9300f63a\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.423687 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tmfst\" (UniqueName: \"kubernetes.io/projected/611a7063-c924-4b35-b2d7-d4d48a7e8f7a-kube-api-access-tmfst\") pod \"infra-operator-controller-manager-9d6c5db85-cg7tn\" (UID: \"611a7063-c924-4b35-b2d7-d4d48a7e8f7a\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.423709 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p7jpb\" (UniqueName: \"kubernetes.io/projected/61fc51ec-92f9-4dc7-a0ea-793712809ebc-kube-api-access-p7jpb\") pod \"manila-operator-controller-manager-6d68dbc695-mtq8j\" (UID: \"61fc51ec-92f9-4dc7-a0ea-793712809ebc\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.425219 4949 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.437466 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/611a7063-c924-4b35-b2d7-d4d48a7e8f7a-cert\") pod \"infra-operator-controller-manager-9d6c5db85-cg7tn\" (UID: \"611a7063-c924-4b35-b2d7-d4d48a7e8f7a\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.453302 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbwhp\" (UniqueName: \"kubernetes.io/projected/ad53e0de-7d24-447d-82c6-ab0a523c913a-kube-api-access-nbwhp\") pod \"horizon-operator-controller-manager-9f4696d94-5w8c4\" (UID: \"ad53e0de-7d24-447d-82c6-ab0a523c913a\") " pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.462042 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hj749\" (UniqueName: \"kubernetes.io/projected/36350170-f4ac-4f4b-ba23-a05d9300f63a-kube-api-access-hj749\") pod \"heat-operator-controller-manager-5d889d78cf-4k7z5\" (UID: \"36350170-f4ac-4f4b-ba23-a05d9300f63a\") " pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.470390 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tmfst\" (UniqueName: \"kubernetes.io/projected/611a7063-c924-4b35-b2d7-d4d48a7e8f7a-kube-api-access-tmfst\") pod \"infra-operator-controller-manager-9d6c5db85-cg7tn\" (UID: \"611a7063-c924-4b35-b2d7-d4d48a7e8f7a\") " pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.501483 4949 kubelet.go:2421] "SyncLoop 
ADD" source="api" pods=["openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.502836 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.506460 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"nova-operator-controller-manager-dockercfg-ptzj9" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.526573 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.528428 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.530254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77pj6\" (UniqueName: \"kubernetes.io/projected/58d1b185-0213-427d-8f85-2b2636e0d121-kube-api-access-77pj6\") pod \"ironic-operator-controller-manager-5cd4858477-5pcwg\" (UID: \"58d1b185-0213-427d-8f85-2b2636e0d121\") " pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.530324 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tx4k\" (UniqueName: \"kubernetes.io/projected/1f506f8f-047a-4efa-8ff2-dbe310d0e12e-kube-api-access-8tx4k\") pod \"keystone-operator-controller-manager-5bd55b4bff-mrm9t\" (UID: \"1f506f8f-047a-4efa-8ff2-dbe310d0e12e\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.530428 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-p7jpb\" (UniqueName: \"kubernetes.io/projected/61fc51ec-92f9-4dc7-a0ea-793712809ebc-kube-api-access-p7jpb\") pod \"manila-operator-controller-manager-6d68dbc695-mtq8j\" (UID: \"61fc51ec-92f9-4dc7-a0ea-793712809ebc\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.530475 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qc5nw\" (UniqueName: \"kubernetes.io/projected/6f80de4b-ac33-4b39-b105-5927fd6511fc-kube-api-access-qc5nw\") pod \"mariadb-operator-controller-manager-88c7-98jct\" (UID: \"6f80de4b-ac33-4b39-b105-5927fd6511fc\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.534147 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.537368 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.538754 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.546931 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.547783 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"octavia-operator-controller-manager-dockercfg-jjfc6" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.547960 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"neutron-operator-controller-manager-dockercfg-6djl6" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.551117 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p7jpb\" (UniqueName: \"kubernetes.io/projected/61fc51ec-92f9-4dc7-a0ea-793712809ebc-kube-api-access-p7jpb\") pod \"manila-operator-controller-manager-6d68dbc695-mtq8j\" (UID: \"61fc51ec-92f9-4dc7-a0ea-793712809ebc\") " pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.552361 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77pj6\" (UniqueName: \"kubernetes.io/projected/58d1b185-0213-427d-8f85-2b2636e0d121-kube-api-access-77pj6\") pod \"ironic-operator-controller-manager-5cd4858477-5pcwg\" (UID: \"58d1b185-0213-427d-8f85-2b2636e0d121\") " pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.553496 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tx4k\" (UniqueName: \"kubernetes.io/projected/1f506f8f-047a-4efa-8ff2-dbe310d0e12e-kube-api-access-8tx4k\") pod \"keystone-operator-controller-manager-5bd55b4bff-mrm9t\" (UID: \"1f506f8f-047a-4efa-8ff2-dbe310d0e12e\") " pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.558257 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.566428 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.600620 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.601277 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.628369 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.631319 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-99rq4\" (UniqueName: \"kubernetes.io/projected/39bff9fb-ac50-40cd-b23c-113763e3527e-kube-api-access-99rq4\") pod \"octavia-operator-controller-manager-7b787867f4-r5smf\" (UID: \"39bff9fb-ac50-40cd-b23c-113763e3527e\") " pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.631374 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrgg\" (UniqueName: \"kubernetes.io/projected/acce3e2f-7372-4703-a0e2-3dec09dc5b2d-kube-api-access-fkrgg\") pod \"nova-operator-controller-manager-64cd67b5cb-9tqls\" (UID: \"acce3e2f-7372-4703-a0e2-3dec09dc5b2d\") " pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.631430 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"kube-api-access-qc5nw\" (UniqueName: \"kubernetes.io/projected/6f80de4b-ac33-4b39-b105-5927fd6511fc-kube-api-access-qc5nw\") pod \"mariadb-operator-controller-manager-88c7-98jct\" (UID: \"6f80de4b-ac33-4b39-b105-5927fd6511fc\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.631450 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stvpk\" (UniqueName: \"kubernetes.io/projected/0853e80f-a3aa-4230-b028-f8a0887afb2f-kube-api-access-stvpk\") pod \"neutron-operator-controller-manager-849d5b9b84-lhdzd\" (UID: \"0853e80f-a3aa-4230-b028-f8a0887afb2f\") " pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.662028 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qc5nw\" (UniqueName: \"kubernetes.io/projected/6f80de4b-ac33-4b39-b105-5927fd6511fc-kube-api-access-qc5nw\") pod \"mariadb-operator-controller-manager-88c7-98jct\" (UID: \"6f80de4b-ac33-4b39-b105-5927fd6511fc\") " pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.678793 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.718250 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.719586 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.719605 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.719907 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.722060 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.722087 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.722919 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.723110 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-webhook-server-cert" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.723463 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-baremetal-operator-controller-manager-dockercfg-94mxx" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 
15:56:11.723722 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.723804 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.724173 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.725825 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.729829 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.731424 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"ovn-operator-controller-manager-dockercfg-246hb" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.731606 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"placement-operator-controller-manager-dockercfg-hcjcq" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.734448 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-99rq4\" (UniqueName: \"kubernetes.io/projected/39bff9fb-ac50-40cd-b23c-113763e3527e-kube-api-access-99rq4\") pod \"octavia-operator-controller-manager-7b787867f4-r5smf\" (UID: \"39bff9fb-ac50-40cd-b23c-113763e3527e\") " pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.734503 4949 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fkrgg\" (UniqueName: \"kubernetes.io/projected/acce3e2f-7372-4703-a0e2-3dec09dc5b2d-kube-api-access-fkrgg\") pod \"nova-operator-controller-manager-64cd67b5cb-9tqls\" (UID: \"acce3e2f-7372-4703-a0e2-3dec09dc5b2d\") " pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.734542 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stvpk\" (UniqueName: \"kubernetes.io/projected/0853e80f-a3aa-4230-b028-f8a0887afb2f-kube-api-access-stvpk\") pod \"neutron-operator-controller-manager-849d5b9b84-lhdzd\" (UID: \"0853e80f-a3aa-4230-b028-f8a0887afb2f\") " pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.734749 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.736384 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.740006 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.740275 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.742910 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"telemetry-operator-controller-manager-dockercfg-dmncr" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.743092 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-4v2wv" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.767944 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-99rq4\" (UniqueName: \"kubernetes.io/projected/39bff9fb-ac50-40cd-b23c-113763e3527e-kube-api-access-99rq4\") pod \"octavia-operator-controller-manager-7b787867f4-r5smf\" (UID: \"39bff9fb-ac50-40cd-b23c-113763e3527e\") " pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.767983 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrgg\" (UniqueName: \"kubernetes.io/projected/acce3e2f-7372-4703-a0e2-3dec09dc5b2d-kube-api-access-fkrgg\") pod \"nova-operator-controller-manager-64cd67b5cb-9tqls\" (UID: \"acce3e2f-7372-4703-a0e2-3dec09dc5b2d\") " pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.781450 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/test-operator-controller-manager-85777745bb-l7zmk"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.783618 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.787077 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"test-operator-controller-manager-dockercfg-v69qs" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.787382 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.840463 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-85777745bb-l7zmk"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.841594 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60ac792e-9135-4ea0-84f1-1708c0421e70-cert\") pod \"openstack-baremetal-operator-controller-manager-77b9676b8crvdqk\" (UID: \"60ac792e-9135-4ea0-84f1-1708c0421e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.841643 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhddn\" (UniqueName: \"kubernetes.io/projected/60ac792e-9135-4ea0-84f1-1708c0421e70-kube-api-access-bhddn\") pod \"openstack-baremetal-operator-controller-manager-77b9676b8crvdqk\" (UID: \"60ac792e-9135-4ea0-84f1-1708c0421e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.841670 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vxzht\" (UniqueName: \"kubernetes.io/projected/420e19a1-ab84-4852-ab58-8242a09d5621-kube-api-access-vxzht\") pod 
\"test-operator-controller-manager-85777745bb-l7zmk\" (UID: \"420e19a1-ab84-4852-ab58-8242a09d5621\") " pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.841687 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpdx\" (UniqueName: \"kubernetes.io/projected/97aa883c-f9b1-4f83-884e-13fdd88beca7-kube-api-access-pmpdx\") pod \"placement-operator-controller-manager-589c58c6c-62v4q\" (UID: \"97aa883c-f9b1-4f83-884e-13fdd88beca7\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.841719 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tzrnk\" (UniqueName: \"kubernetes.io/projected/02d48b56-707e-4347-88c5-0429a487042c-kube-api-access-tzrnk\") pod \"telemetry-operator-controller-manager-b8d54b5d7-n8frs\" (UID: \"02d48b56-707e-4347-88c5-0429a487042c\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.841754 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llbwj\" (UniqueName: \"kubernetes.io/projected/2d130278-c73a-4681-ae0f-76385dcf4de9-kube-api-access-llbwj\") pod \"ovn-operator-controller-manager-9976ff44c-mblm4\" (UID: \"2d130278-c73a-4681-ae0f-76385dcf4de9\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.841847 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-snpxw\" (UniqueName: \"kubernetes.io/projected/c3ab5ef5-3877-4512-a279-5d4504fd7301-kube-api-access-snpxw\") pod \"swift-operator-controller-manager-84d6b4b759-9qvwb\" (UID: 
\"c3ab5ef5-3877-4512-a279-5d4504fd7301\") " pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.848857 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stvpk\" (UniqueName: \"kubernetes.io/projected/0853e80f-a3aa-4230-b028-f8a0887afb2f-kube-api-access-stvpk\") pod \"neutron-operator-controller-manager-849d5b9b84-lhdzd\" (UID: \"0853e80f-a3aa-4230-b028-f8a0887afb2f\") " pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.885236 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.887604 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.888685 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.897596 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.899495 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"watcher-operator-controller-manager-dockercfg-f5kc8" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.911341 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.948859 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-snpxw\" (UniqueName: \"kubernetes.io/projected/c3ab5ef5-3877-4512-a279-5d4504fd7301-kube-api-access-snpxw\") pod \"swift-operator-controller-manager-84d6b4b759-9qvwb\" (UID: \"c3ab5ef5-3877-4512-a279-5d4504fd7301\") " pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.949420 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60ac792e-9135-4ea0-84f1-1708c0421e70-cert\") pod \"openstack-baremetal-operator-controller-manager-77b9676b8crvdqk\" (UID: \"60ac792e-9135-4ea0-84f1-1708c0421e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.949476 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn4gn\" (UniqueName: \"kubernetes.io/projected/013782e2-98ad-4401-af7d-22ac977c0e42-kube-api-access-bn4gn\") pod \"watcher-operator-controller-manager-6b9957f54f-z2b6f\" (UID: \"013782e2-98ad-4401-af7d-22ac977c0e42\") " pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.949506 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-bhddn\" (UniqueName: \"kubernetes.io/projected/60ac792e-9135-4ea0-84f1-1708c0421e70-kube-api-access-bhddn\") pod \"openstack-baremetal-operator-controller-manager-77b9676b8crvdqk\" (UID: \"60ac792e-9135-4ea0-84f1-1708c0421e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.949549 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vxzht\" (UniqueName: \"kubernetes.io/projected/420e19a1-ab84-4852-ab58-8242a09d5621-kube-api-access-vxzht\") pod \"test-operator-controller-manager-85777745bb-l7zmk\" (UID: \"420e19a1-ab84-4852-ab58-8242a09d5621\") " pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.949572 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmpdx\" (UniqueName: \"kubernetes.io/projected/97aa883c-f9b1-4f83-884e-13fdd88beca7-kube-api-access-pmpdx\") pod \"placement-operator-controller-manager-589c58c6c-62v4q\" (UID: \"97aa883c-f9b1-4f83-884e-13fdd88beca7\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.949623 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tzrnk\" (UniqueName: \"kubernetes.io/projected/02d48b56-707e-4347-88c5-0429a487042c-kube-api-access-tzrnk\") pod \"telemetry-operator-controller-manager-b8d54b5d7-n8frs\" (UID: \"02d48b56-707e-4347-88c5-0429a487042c\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.949659 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llbwj\" (UniqueName: 
\"kubernetes.io/projected/2d130278-c73a-4681-ae0f-76385dcf4de9-kube-api-access-llbwj\") pod \"ovn-operator-controller-manager-9976ff44c-mblm4\" (UID: \"2d130278-c73a-4681-ae0f-76385dcf4de9\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" Oct 01 15:56:11 crc kubenswrapper[4949]: E1001 15:56:11.952086 4949 secret.go:188] Couldn't get secret openstack-operators/openstack-baremetal-operator-webhook-server-cert: secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 01 15:56:11 crc kubenswrapper[4949]: E1001 15:56:11.952166 4949 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/60ac792e-9135-4ea0-84f1-1708c0421e70-cert podName:60ac792e-9135-4ea0-84f1-1708c0421e70 nodeName:}" failed. No retries permitted until 2025-10-01 15:56:12.452142381 +0000 UTC m=+871.757748572 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/60ac792e-9135-4ea0-84f1-1708c0421e70-cert") pod "openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" (UID: "60ac792e-9135-4ea0-84f1-1708c0421e70") : secret "openstack-baremetal-operator-webhook-server-cert" not found Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.988047 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpdx\" (UniqueName: \"kubernetes.io/projected/97aa883c-f9b1-4f83-884e-13fdd88beca7-kube-api-access-pmpdx\") pod \"placement-operator-controller-manager-589c58c6c-62v4q\" (UID: \"97aa883c-f9b1-4f83-884e-13fdd88beca7\") " pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.996175 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz"] Oct 01 15:56:11 crc kubenswrapper[4949]: I1001 15:56:11.997627 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.011053 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llbwj\" (UniqueName: \"kubernetes.io/projected/2d130278-c73a-4681-ae0f-76385dcf4de9-kube-api-access-llbwj\") pod \"ovn-operator-controller-manager-9976ff44c-mblm4\" (UID: \"2d130278-c73a-4681-ae0f-76385dcf4de9\") " pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.017047 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-snpxw\" (UniqueName: \"kubernetes.io/projected/c3ab5ef5-3877-4512-a279-5d4504fd7301-kube-api-access-snpxw\") pod \"swift-operator-controller-manager-84d6b4b759-9qvwb\" (UID: \"c3ab5ef5-3877-4512-a279-5d4504fd7301\") " pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.022948 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"openstack-operator-controller-manager-dockercfg-rfscx" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.023151 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.046537 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bhddn\" (UniqueName: \"kubernetes.io/projected/60ac792e-9135-4ea0-84f1-1708c0421e70-kube-api-access-bhddn\") pod \"openstack-baremetal-operator-controller-manager-77b9676b8crvdqk\" (UID: \"60ac792e-9135-4ea0-84f1-1708c0421e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.050290 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz"] Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.051084 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn4gn\" (UniqueName: \"kubernetes.io/projected/013782e2-98ad-4401-af7d-22ac977c0e42-kube-api-access-bn4gn\") pod \"watcher-operator-controller-manager-6b9957f54f-z2b6f\" (UID: \"013782e2-98ad-4401-af7d-22ac977c0e42\") " pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.064641 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.067276 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tzrnk\" (UniqueName: \"kubernetes.io/projected/02d48b56-707e-4347-88c5-0429a487042c-kube-api-access-tzrnk\") pod \"telemetry-operator-controller-manager-b8d54b5d7-n8frs\" (UID: \"02d48b56-707e-4347-88c5-0429a487042c\") " pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.070602 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vxzht\" (UniqueName: \"kubernetes.io/projected/420e19a1-ab84-4852-ab58-8242a09d5621-kube-api-access-vxzht\") pod \"test-operator-controller-manager-85777745bb-l7zmk\" (UID: \"420e19a1-ab84-4852-ab58-8242a09d5621\") " pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.084481 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn4gn\" (UniqueName: \"kubernetes.io/projected/013782e2-98ad-4401-af7d-22ac977c0e42-kube-api-access-bn4gn\") pod \"watcher-operator-controller-manager-6b9957f54f-z2b6f\" (UID: 
\"013782e2-98ad-4401-af7d-22ac977c0e42\") " pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.099335 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh"] Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.100195 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.100461 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.105792 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-controller-manager-dockercfg-gvmq7" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.108362 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh"] Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.154650 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fnvvs\" (UniqueName: \"kubernetes.io/projected/e56e40fb-b1f0-4955-9949-6e06db62d247-kube-api-access-fnvvs\") pod \"openstack-operator-controller-manager-745f9964cd-cwnfz\" (UID: \"e56e40fb-b1f0-4955-9949-6e06db62d247\") " pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.154934 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e56e40fb-b1f0-4955-9949-6e06db62d247-cert\") pod \"openstack-operator-controller-manager-745f9964cd-cwnfz\" (UID: 
\"e56e40fb-b1f0-4955-9949-6e06db62d247\") " pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.187495 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.256885 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e56e40fb-b1f0-4955-9949-6e06db62d247-cert\") pod \"openstack-operator-controller-manager-745f9964cd-cwnfz\" (UID: \"e56e40fb-b1f0-4955-9949-6e06db62d247\") " pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.257028 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrbc2\" (UniqueName: \"kubernetes.io/projected/3ad42e41-782a-480a-b8f5-e449eddb1649-kube-api-access-qrbc2\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh\" (UID: \"3ad42e41-782a-480a-b8f5-e449eddb1649\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.257058 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fnvvs\" (UniqueName: \"kubernetes.io/projected/e56e40fb-b1f0-4955-9949-6e06db62d247-kube-api-access-fnvvs\") pod \"openstack-operator-controller-manager-745f9964cd-cwnfz\" (UID: \"e56e40fb-b1f0-4955-9949-6e06db62d247\") " pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:12 crc kubenswrapper[4949]: E1001 15:56:12.261728 4949 secret.go:188] Couldn't get secret openstack-operators/webhook-server-cert: secret "webhook-server-cert" not found Oct 01 15:56:12 crc kubenswrapper[4949]: E1001 15:56:12.261789 4949 nestedpendingoperations.go:348] 
Operation for "{volumeName:kubernetes.io/secret/e56e40fb-b1f0-4955-9949-6e06db62d247-cert podName:e56e40fb-b1f0-4955-9949-6e06db62d247 nodeName:}" failed. No retries permitted until 2025-10-01 15:56:12.761773278 +0000 UTC m=+872.067379469 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/e56e40fb-b1f0-4955-9949-6e06db62d247-cert") pod "openstack-operator-controller-manager-745f9964cd-cwnfz" (UID: "e56e40fb-b1f0-4955-9949-6e06db62d247") : secret "webhook-server-cert" not found Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.278168 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.290964 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.307892 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fnvvs\" (UniqueName: \"kubernetes.io/projected/e56e40fb-b1f0-4955-9949-6e06db62d247-kube-api-access-fnvvs\") pod \"openstack-operator-controller-manager-745f9964cd-cwnfz\" (UID: \"e56e40fb-b1f0-4955-9949-6e06db62d247\") " pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.362981 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrbc2\" (UniqueName: \"kubernetes.io/projected/3ad42e41-782a-480a-b8f5-e449eddb1649-kube-api-access-qrbc2\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh\" (UID: \"3ad42e41-782a-480a-b8f5-e449eddb1649\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.393332 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qrbc2\" (UniqueName: \"kubernetes.io/projected/3ad42e41-782a-480a-b8f5-e449eddb1649-kube-api-access-qrbc2\") pod \"rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh\" (UID: \"3ad42e41-782a-480a-b8f5-e449eddb1649\") " pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" Oct 01 15:56:12 crc kubenswrapper[4949]: E1001 15:56:12.393695 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/system.slice/rpm-ostreed.service\": RecentStats: unable to find data in memory cache]" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.464918 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60ac792e-9135-4ea0-84f1-1708c0421e70-cert\") pod \"openstack-baremetal-operator-controller-manager-77b9676b8crvdqk\" (UID: \"60ac792e-9135-4ea0-84f1-1708c0421e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.470869 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/60ac792e-9135-4ea0-84f1-1708c0421e70-cert\") pod \"openstack-baremetal-operator-controller-manager-77b9676b8crvdqk\" (UID: \"60ac792e-9135-4ea0-84f1-1708c0421e70\") " pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.524465 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.572893 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx"] Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.579521 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2"] Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.642447 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.650089 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.674569 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.771169 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e56e40fb-b1f0-4955-9949-6e06db62d247-cert\") pod \"openstack-operator-controller-manager-745f9964cd-cwnfz\" (UID: \"e56e40fb-b1f0-4955-9949-6e06db62d247\") " pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.784463 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e56e40fb-b1f0-4955-9949-6e06db62d247-cert\") pod \"openstack-operator-controller-manager-745f9964cd-cwnfz\" (UID: \"e56e40fb-b1f0-4955-9949-6e06db62d247\") " pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.790494 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk"] Oct 01 15:56:12 crc kubenswrapper[4949]: W1001 15:56:12.807515 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode8658648_cf73_468c_8291_7b6ad0f265e6.slice/crio-47577038e410ea8cd101facfec4012cab6f33a5b1b9797230db4fe12b04b364d WatchSource:0}: Error finding container 47577038e410ea8cd101facfec4012cab6f33a5b1b9797230db4fe12b04b364d: Status 404 returned error can't find the container with id 47577038e410ea8cd101facfec4012cab6f33a5b1b9797230db4fe12b04b364d Oct 01 15:56:12 crc kubenswrapper[4949]: I1001 15:56:12.965672 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.033909 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.038645 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2"] Oct 01 15:56:13 crc kubenswrapper[4949]: W1001 15:56:13.059725 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6bec54a7_c02d_45e5_bc1a_22ca4e0d2229.slice/crio-ce439d4c22b62af9c99b9fe89091f5e236f9722526a1f04560e0d209482535fb WatchSource:0}: Error finding container ce439d4c22b62af9c99b9fe89091f5e236f9722526a1f04560e0d209482535fb: Status 404 returned error can't find the container with id ce439d4c22b62af9c99b9fe89091f5e236f9722526a1f04560e0d209482535fb Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.064824 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.407308 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.430105 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.440838 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls"] Oct 01 15:56:13 crc kubenswrapper[4949]: W1001 15:56:13.450580 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod611a7063_c924_4b35_b2d7_d4d48a7e8f7a.slice/crio-a95339d1daccc3463175892a87fc7fc37715da1fe08b8397791b1dbeb7322c6a WatchSource:0}: Error finding container a95339d1daccc3463175892a87fc7fc37715da1fe08b8397791b1dbeb7322c6a: Status 404 returned error can't find the container with id a95339d1daccc3463175892a87fc7fc37715da1fe08b8397791b1dbeb7322c6a Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.462050 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.469233 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.476417 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.498357 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.504047 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" event={"ID":"e8658648-cf73-468c-8291-7b6ad0f265e6","Type":"ContainerStarted","Data":"47577038e410ea8cd101facfec4012cab6f33a5b1b9797230db4fe12b04b364d"} Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.506735 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-88c7-98jct"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.516809 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 
15:56:13.516841 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" event={"ID":"fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4","Type":"ContainerStarted","Data":"807ea2ec88598b0910b4a4493d1b949a644948bb3e21369c02f444efe010c2c0"} Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.526257 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.531669 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.536633 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" event={"ID":"0853e80f-a3aa-4230-b028-f8a0887afb2f","Type":"ContainerStarted","Data":"c6fe8721b73a7d18783a368d551ecbbff959cada08ab0abc350bc8df8f9a9af2"} Oct 01 15:56:13 crc kubenswrapper[4949]: E1001 15:56:13.540101 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/mariadb-operator@sha256:15d7b5a365350a831ca59d984df67fadeccf89d599e487a7597b105afb82ce4a,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qc5nw,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod mariadb-operator-controller-manager-88c7-98jct_openstack-operators(6f80de4b-ac33-4b39-b105-5927fd6511fc): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.552490 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" event={"ID":"e5ed691c-da8b-4bae-8d20-e92c1e062ea2","Type":"ContainerStarted","Data":"b1fa80ed2a515bd37dea174ebd5d1a9c9e2b40050afd17a17f3e8c06f0290df4"} Oct 01 15:56:13 crc 
kubenswrapper[4949]: I1001 15:56:13.562212 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.566834 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.567178 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" event={"ID":"58d1b185-0213-427d-8f85-2b2636e0d121","Type":"ContainerStarted","Data":"4b1a19a34c2695d9fae6edfbd6cf761573dc076e4383af52b6e6d9ae92b81a68"} Oct 01 15:56:13 crc kubenswrapper[4949]: W1001 15:56:13.570314 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod02d48b56_707e_4347_88c5_0429a487042c.slice/crio-c81ce06ad832b086b56aa217e0abf6772ea59771d78bb2e40b97d0ef7e2f26aa WatchSource:0}: Error finding container c81ce06ad832b086b56aa217e0abf6772ea59771d78bb2e40b97d0ef7e2f26aa: Status 404 returned error can't find the container with id c81ce06ad832b086b56aa217e0abf6772ea59771d78bb2e40b97d0ef7e2f26aa Oct 01 15:56:13 crc kubenswrapper[4949]: E1001 15:56:13.573482 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m 
DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tzrnk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod telemetry-operator-controller-manager-b8d54b5d7-n8frs_openstack-operators(02d48b56-707e-4347-88c5-0429a487042c): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 01 15:56:13 crc kubenswrapper[4949]: W1001 15:56:13.581938 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podad53e0de_7d24_447d_82c6_ab0a523c913a.slice/crio-0d0b81a309d62c331af937995613d43ebc786e9fdcc4fc15c5854b31ea32f137 WatchSource:0}: Error finding container 0d0b81a309d62c331af937995613d43ebc786e9fdcc4fc15c5854b31ea32f137: Status 404 returned error can't find the container with id 0d0b81a309d62c331af937995613d43ebc786e9fdcc4fc15c5854b31ea32f137 Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.585147 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" event={"ID":"611a7063-c924-4b35-b2d7-d4d48a7e8f7a","Type":"ContainerStarted","Data":"a95339d1daccc3463175892a87fc7fc37715da1fe08b8397791b1dbeb7322c6a"} Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.587027 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.590094 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" event={"ID":"1f506f8f-047a-4efa-8ff2-dbe310d0e12e","Type":"ContainerStarted","Data":"da37eed108a1b0ad337d9c6822dfa8274e0d23a2a75f3286dac3aadcc65b1eee"} Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.590973 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/test-operator-controller-manager-85777745bb-l7zmk"] Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.591439 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" event={"ID":"6bec54a7-c02d-45e5-bc1a-22ca4e0d2229","Type":"ContainerStarted","Data":"ce439d4c22b62af9c99b9fe89091f5e236f9722526a1f04560e0d209482535fb"} Oct 01 15:56:13 crc kubenswrapper[4949]: E1001 15:56:13.593552 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:manager,Image:quay.io/openstack-k8s-operators/horizon-operator@sha256:f5f0d2eb534f763cf6578af513add1c21c1659b2cd75214dfddfedb9eebf6397,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-nbwhp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod horizon-operator-controller-manager-9f4696d94-5w8c4_openstack-operators(ad53e0de-7d24-447d-82c6-ab0a523c913a): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 01 15:56:13 crc kubenswrapper[4949]: E1001 15:56:13.593940 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e7cfed051c1cf801e651fd4035070e38698039f284ac0b2a0332769fdbb4a9c8,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 
--leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-baremetal-operator-agent:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_ANSIBLEEE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-ansibleee-runner:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_EVALUATOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-evaluator:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_AODH_NOTIFIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-aodh-notifier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_APACHE_IMAGE_URL_DEFAULT,Value:registry.redhat.io/ubi9/httpd-24:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_KEYSTONE_LISTENER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-keystone-listener:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BARBICAN_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-barbican-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_COMPUTE_IMAGE_URL_D
EFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_IPMI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_MYSQLD_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/mysqld-exporter:v0.15.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_NOTIFICATION_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ceilometer-notification:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CEILOMETER_SGCORE_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_BACKUP_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-backup:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CINDER_VOLUME_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cinder-volume:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_BACKENDBIND9_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-backend-bind9:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_CENTRAL_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-central:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_MDNS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-mdns:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_I
MAGE_DESIGNATE_PRODUCER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-producer:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_UNBOUND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-unbound:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_DESIGNATE_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-designate-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_FRR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-frr:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_ISCSID_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-iscsid:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_KEPLER_IMAGE_URL_DEFAULT,Value:quay.io/sustainable_computing_io/kepler:release-0.7.12,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_LOGROTATE_CROND_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-cron:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_MULTIPATHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-multipathd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_DHCP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-dhcp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_METADATA_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_OVN_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-ovn-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NEUTRON_SRIOV_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-sriov-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_NODE_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/prometheus/node-exporter:v1.5.0,ValueFrom:nil,}
,EnvVar{Name:RELATED_IMAGE_EDPM_OVN_BGP_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-bgp-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_EDPM_PODMAN_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/navidys/prometheus-podman-exporter:v1.10.1,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_GLANCE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-glance-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_CFNAPI_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-api-cfn:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HEAT_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-heat-engine:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_HORIZON_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-horizon:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_MEMCACHED_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-memcached:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_INFRA_REDIS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-redis:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_INSPECTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-inspector:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_NEUTRON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-neutron-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMA
GE_IRONIC_PXE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ironic-pxe:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_IRONIC_PYTHON_AGENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/ironic-python-agent:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KEYSTONE_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-keystone:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_KSM_IMAGE_URL_DEFAULT,Value:registry.k8s.io/kube-state-metrics/kube-state-metrics:v2.15.0,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MANILA_SHARE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-manila-share:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_MARIADB_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-mariadb:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NET_UTILS_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-netutils:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NEUTRON_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_COMPUTE_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_CONDUCTOR_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_NOVNC_IMAGE_URL_DEFAULT,Value:quay.io/podifi
ed-antelope-centos9/openstack-nova-novncproxy:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_NOVA_SCHEDULER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-nova-scheduler:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HEALTHMANAGER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-health-manager:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_HOUSEKEEPING_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-housekeeping:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_RSYSLOG_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rsyslog:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OCTAVIA_WORKER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-octavia-worker:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_CLIENT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-openstackclient:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_LIGHTSPEED_IMAGE_URL_DEFAULT,Value:quay.io/openstack-lightspeed/rag-content:os-docs-2024.2,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_MUST_GATHER_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-must-gather:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OPENSTACK_NETWORK_EXPORTER_IMAGE_URL_DEFAULT,Value:quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OS_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/edpm-hardened-uefi:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_CONTROLLER_OVS_IM
AGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-base:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-nb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_NORTHD_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-northd:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OVN_SB_DBCLUSTER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-ovn-sb-db-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PLACEMENT_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-placement-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_RABBITMQ_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-rabbitmq:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_ACCOUNT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-account:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_CONTAINER_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-container:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_OBJECT_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-object:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_SWIFT_PROXY_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-swift-proxy-server:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_TEST_TEMPEST_IMAGE_URL_DEFAULT,Value:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_API_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-api:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_APPLIER_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-applier:current-podified,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_WATCHER_DECI
SION_ENGINE_IMAGE_URL_DEFAULT,Value:quay.io/podified-master-centos9/openstack-watcher-decision-engine:current-podified,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cert,ReadOnly:true,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bhddn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
openstack-baremetal-operator-controller-manager-77b9676b8crvdqk_openstack-operators(60ac792e-9135-4ea0-84f1-1708c0421e70): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 01 15:56:13 crc kubenswrapper[4949]: W1001 15:56:13.604083 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d130278_c73a_4681_ae0f_76385dcf4de9.slice/crio-eaf7ae462f657288766dfa438765a1c8a0ea48c80118b2cdce9f0072f0b3de16 WatchSource:0}: Error finding container eaf7ae462f657288766dfa438765a1c8a0ea48c80118b2cdce9f0072f0b3de16: Status 404 returned error can't find the container with id eaf7ae462f657288766dfa438765a1c8a0ea48c80118b2cdce9f0072f0b3de16 Oct 01 15:56:13 crc kubenswrapper[4949]: E1001 15:56:13.617604 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2,Command:[/manager],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:metrics,HostPort:0,ContainerPort:9782,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{67108864 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qrbc2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000660000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh_openstack-operators(3ad42e41-782a-480a-b8f5-e449eddb1649): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 01 15:56:13 crc kubenswrapper[4949]: E1001 15:56:13.617638 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/test-operator@sha256:f61fdfbfd12027ce6b4e7ad553ec0582f080de0cfb472de6dc04ad3078bb17e3,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-vxzht,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod test-operator-controller-manager-85777745bb-l7zmk_openstack-operators(420e19a1-ab84-4852-ab58-8242a09d5621): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 01 15:56:13 crc kubenswrapper[4949]: E1001 15:56:13.617794 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:quay.io/openstack-k8s-operators/ovn-operator@sha256:1051afc168038fb814f75e7a5f07c588b295a83ebd143dcd8b46d799e31ad302,Command:[/manager],Args:[--health-probe-bind-address=:8081 
--metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-llbwj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
ovn-operator-controller-manager-9976ff44c-mblm4_openstack-operators(2d130278-c73a-4681-ae0f-76385dcf4de9): ErrImagePull: pull QPS exceeded" logger="UnhandledError" Oct 01 15:56:13 crc kubenswrapper[4949]: E1001 15:56:13.619718 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" podUID="3ad42e41-782a-480a-b8f5-e449eddb1649" Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.776515 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f"] Oct 01 15:56:13 crc kubenswrapper[4949]: W1001 15:56:13.788689 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod013782e2_98ad_4401_af7d_22ac977c0e42.slice/crio-c071cbcff3630c8ab82a48227fbea0a210e7d9b96d64d112e449f882c381fdbf WatchSource:0}: Error finding container c071cbcff3630c8ab82a48227fbea0a210e7d9b96d64d112e449f882c381fdbf: Status 404 returned error can't find the container with id c071cbcff3630c8ab82a48227fbea0a210e7d9b96d64d112e449f882c381fdbf Oct 01 15:56:13 crc kubenswrapper[4949]: I1001 15:56:13.789552 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz"] Oct 01 15:56:13 crc kubenswrapper[4949]: W1001 15:56:13.797147 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode56e40fb_b1f0_4955_9949_6e06db62d247.slice/crio-f6157d88e4c8337db7b9d24363799b8f8e4032d963698b7efa3d1637d933122c WatchSource:0}: Error finding container f6157d88e4c8337db7b9d24363799b8f8e4032d963698b7efa3d1637d933122c: Status 404 returned error can't find the container with id f6157d88e4c8337db7b9d24363799b8f8e4032d963698b7efa3d1637d933122c Oct 01 15:56:14 crc 
kubenswrapper[4949]: I1001 15:56:14.599862 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" event={"ID":"36350170-f4ac-4f4b-ba23-a05d9300f63a","Type":"ContainerStarted","Data":"ee43edc7f0a5465ad597345249681bd0313904490b2acfc837eada5bd1f7d350"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.602451 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" event={"ID":"61fc51ec-92f9-4dc7-a0ea-793712809ebc","Type":"ContainerStarted","Data":"041374ab9918e14dff3b79d59fde0e0b505a17deff151d177a9708baeb280e16"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.603489 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" event={"ID":"2d130278-c73a-4681-ae0f-76385dcf4de9","Type":"ContainerStarted","Data":"eaf7ae462f657288766dfa438765a1c8a0ea48c80118b2cdce9f0072f0b3de16"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.604911 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" event={"ID":"420e19a1-ab84-4852-ab58-8242a09d5621","Type":"ContainerStarted","Data":"73b66506e13d951bfe6fac756ef625544c21845098c6669b51922611e5e6de0b"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.606283 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" event={"ID":"acce3e2f-7372-4703-a0e2-3dec09dc5b2d","Type":"ContainerStarted","Data":"c67a3c3f64240b41c1d40ba14f8131d65742b946e3e595cb2ee6dee68d997204"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.607520 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" 
event={"ID":"e56e40fb-b1f0-4955-9949-6e06db62d247","Type":"ContainerStarted","Data":"f6157d88e4c8337db7b9d24363799b8f8e4032d963698b7efa3d1637d933122c"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.608699 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" event={"ID":"39bff9fb-ac50-40cd-b23c-113763e3527e","Type":"ContainerStarted","Data":"69bc4a885e8c0c0aac3a882e9153ef0e204814a7cc4dcca5fc9932c6071ea2ae"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.609940 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" event={"ID":"97aa883c-f9b1-4f83-884e-13fdd88beca7","Type":"ContainerStarted","Data":"a18ecec6b62ee1830eb4dca1b0db7c18f01aa69b151b16b92cf8026cdabc94c5"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.610911 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" event={"ID":"3ad42e41-782a-480a-b8f5-e449eddb1649","Type":"ContainerStarted","Data":"0133d7e0490d398f5f3118729e8eef4481770d34fd0d34a13a41dd92c7d96f0e"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.613163 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" event={"ID":"02d48b56-707e-4347-88c5-0429a487042c","Type":"ContainerStarted","Data":"c81ce06ad832b086b56aa217e0abf6772ea59771d78bb2e40b97d0ef7e2f26aa"} Oct 01 15:56:14 crc kubenswrapper[4949]: E1001 15:56:14.613810 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" 
podUID="3ad42e41-782a-480a-b8f5-e449eddb1649" Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.617503 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" event={"ID":"c3ab5ef5-3877-4512-a279-5d4504fd7301","Type":"ContainerStarted","Data":"5e7b15755db794f398b1c5345b73ffe47af345ea517e56fe79e69a520f2f4edc"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.620887 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" event={"ID":"6f80de4b-ac33-4b39-b105-5927fd6511fc","Type":"ContainerStarted","Data":"e7c3ae1d1086d1bcc98303aad9b8d206ef5eb5106950d2c171d735fb3af25828"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.622117 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" event={"ID":"ad53e0de-7d24-447d-82c6-ab0a523c913a","Type":"ContainerStarted","Data":"0d0b81a309d62c331af937995613d43ebc786e9fdcc4fc15c5854b31ea32f137"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.623048 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" event={"ID":"60ac792e-9135-4ea0-84f1-1708c0421e70","Type":"ContainerStarted","Data":"9272f015e91bac1d2a2fa326f872fef7a0bc0eab40e1bb6e205bb590a564323c"} Oct 01 15:56:14 crc kubenswrapper[4949]: I1001 15:56:14.624183 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" event={"ID":"013782e2-98ad-4401-af7d-22ac977c0e42","Type":"ContainerStarted","Data":"c071cbcff3630c8ab82a48227fbea0a210e7d9b96d64d112e449f882c381fdbf"} Oct 01 15:56:15 crc kubenswrapper[4949]: I1001 15:56:15.631758 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" 
event={"ID":"ad53e0de-7d24-447d-82c6-ab0a523c913a","Type":"ContainerStarted","Data":"c3cbd0b849b29c2a17311c375b49e6d6706d9188bb29a4b061487a4e20340489"} Oct 01 15:56:15 crc kubenswrapper[4949]: E1001 15:56:15.633310 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/rabbitmq-cluster-operator@sha256:893e66303c1b0bc1d00a299a3f0380bad55c8dc813c8a1c6a4aab379f5aa12a2\\\"\"" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" podUID="3ad42e41-782a-480a-b8f5-e449eddb1649" Oct 01 15:56:15 crc kubenswrapper[4949]: E1001 15:56:15.792837 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" podUID="ad53e0de-7d24-447d-82c6-ab0a523c913a" Oct 01 15:56:15 crc kubenswrapper[4949]: E1001 15:56:15.869698 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" podUID="60ac792e-9135-4ea0-84f1-1708c0421e70" Oct 01 15:56:15 crc kubenswrapper[4949]: E1001 15:56:15.869809 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" podUID="420e19a1-ab84-4852-ab58-8242a09d5621" Oct 01 15:56:15 crc kubenswrapper[4949]: E1001 15:56:15.870081 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" podUID="02d48b56-707e-4347-88c5-0429a487042c" Oct 01 
15:56:15 crc kubenswrapper[4949]: E1001 15:56:15.870135 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" podUID="2d130278-c73a-4681-ae0f-76385dcf4de9" Oct 01 15:56:16 crc kubenswrapper[4949]: E1001 15:56:16.021400 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"pull QPS exceeded\"" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" podUID="6f80de4b-ac33-4b39-b105-5927fd6511fc" Oct 01 15:56:16 crc kubenswrapper[4949]: I1001 15:56:16.641941 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" event={"ID":"6f80de4b-ac33-4b39-b105-5927fd6511fc","Type":"ContainerStarted","Data":"a5ea023ba6273997341a1615a6fa2f0dfcf1a0b3c37aa6c1a3b7797313d4e4f6"} Oct 01 15:56:16 crc kubenswrapper[4949]: E1001 15:56:16.643513 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:15d7b5a365350a831ca59d984df67fadeccf89d599e487a7597b105afb82ce4a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" podUID="6f80de4b-ac33-4b39-b105-5927fd6511fc" Oct 01 15:56:16 crc kubenswrapper[4949]: I1001 15:56:16.643787 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" event={"ID":"60ac792e-9135-4ea0-84f1-1708c0421e70","Type":"ContainerStarted","Data":"62e9087110f4759eb308904240a4f65cbb9fbf53e4f5ee4c397014f5e550c50e"} Oct 01 15:56:16 crc kubenswrapper[4949]: E1001 15:56:16.645407 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with 
ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e7cfed051c1cf801e651fd4035070e38698039f284ac0b2a0332769fdbb4a9c8\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" podUID="60ac792e-9135-4ea0-84f1-1708c0421e70" Oct 01 15:56:16 crc kubenswrapper[4949]: I1001 15:56:16.646943 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" event={"ID":"420e19a1-ab84-4852-ab58-8242a09d5621","Type":"ContainerStarted","Data":"b1ce5c624f910ab16769503577cd82a5e4dc49526ff3b9e3ef7914e2484f89f7"} Oct 01 15:56:16 crc kubenswrapper[4949]: E1001 15:56:16.648381 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f61fdfbfd12027ce6b4e7ad553ec0582f080de0cfb472de6dc04ad3078bb17e3\\\"\"" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" podUID="420e19a1-ab84-4852-ab58-8242a09d5621" Oct 01 15:56:16 crc kubenswrapper[4949]: I1001 15:56:16.650700 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" event={"ID":"e56e40fb-b1f0-4955-9949-6e06db62d247","Type":"ContainerStarted","Data":"e364b46c17165ab99d889c1eed9d52eb4d8b88069e65b4fc8ae246d1e1a9544f"} Oct 01 15:56:16 crc kubenswrapper[4949]: I1001 15:56:16.650748 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" event={"ID":"e56e40fb-b1f0-4955-9949-6e06db62d247","Type":"ContainerStarted","Data":"de070df2df4596ccbd93c08b65532cf351ca6458f46d93cfc4aee182a9315df6"} Oct 01 15:56:16 crc kubenswrapper[4949]: I1001 15:56:16.651520 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:16 crc kubenswrapper[4949]: I1001 15:56:16.669294 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" event={"ID":"02d48b56-707e-4347-88c5-0429a487042c","Type":"ContainerStarted","Data":"d595b9da33b23abc4b0e984e78c5c6e0db00f62d84855b097eee8ab70b4149bf"} Oct 01 15:56:16 crc kubenswrapper[4949]: E1001 15:56:16.673239 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" podUID="02d48b56-707e-4347-88c5-0429a487042c" Oct 01 15:56:16 crc kubenswrapper[4949]: I1001 15:56:16.675402 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" event={"ID":"2d130278-c73a-4681-ae0f-76385dcf4de9","Type":"ContainerStarted","Data":"fe5aa9156ac64461a555473669137f51e551f3fb40109368fac511faec72ce02"} Oct 01 15:56:16 crc kubenswrapper[4949]: E1001 15:56:16.677478 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/horizon-operator@sha256:f5f0d2eb534f763cf6578af513add1c21c1659b2cd75214dfddfedb9eebf6397\\\"\"" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" podUID="ad53e0de-7d24-447d-82c6-ab0a523c913a" Oct 01 15:56:16 crc kubenswrapper[4949]: E1001 15:56:16.677822 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:1051afc168038fb814f75e7a5f07c588b295a83ebd143dcd8b46d799e31ad302\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" podUID="2d130278-c73a-4681-ae0f-76385dcf4de9" Oct 01 15:56:16 crc kubenswrapper[4949]: I1001 15:56:16.743052 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" podStartSLOduration=5.74303284 podStartE2EDuration="5.74303284s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:56:16.716173481 +0000 UTC m=+876.021779702" watchObservedRunningTime="2025-10-01 15:56:16.74303284 +0000 UTC m=+876.048639031" Oct 01 15:56:17 crc kubenswrapper[4949]: E1001 15:56:17.683046 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/telemetry-operator@sha256:8fdf377daf05e2fa7346505017078fa81981dd945bf635a64c8022633c68118f\\\"\"" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" podUID="02d48b56-707e-4347-88c5-0429a487042c" Oct 01 15:56:17 crc kubenswrapper[4949]: E1001 15:56:17.683058 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/test-operator@sha256:f61fdfbfd12027ce6b4e7ad553ec0582f080de0cfb472de6dc04ad3078bb17e3\\\"\"" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" podUID="420e19a1-ab84-4852-ab58-8242a09d5621" Oct 01 15:56:17 crc kubenswrapper[4949]: E1001 15:56:17.683105 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off 
pulling image \\\"quay.io/openstack-k8s-operators/openstack-baremetal-operator@sha256:e7cfed051c1cf801e651fd4035070e38698039f284ac0b2a0332769fdbb4a9c8\\\"\"" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" podUID="60ac792e-9135-4ea0-84f1-1708c0421e70" Oct 01 15:56:17 crc kubenswrapper[4949]: E1001 15:56:17.683154 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/mariadb-operator@sha256:15d7b5a365350a831ca59d984df67fadeccf89d599e487a7597b105afb82ce4a\\\"\"" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" podUID="6f80de4b-ac33-4b39-b105-5927fd6511fc" Oct 01 15:56:17 crc kubenswrapper[4949]: E1001 15:56:17.684083 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openstack-k8s-operators/ovn-operator@sha256:1051afc168038fb814f75e7a5f07c588b295a83ebd143dcd8b46d799e31ad302\\\"\"" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" podUID="2d130278-c73a-4681-ae0f-76385dcf4de9" Oct 01 15:56:18 crc kubenswrapper[4949]: I1001 15:56:18.038813 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:56:18 crc kubenswrapper[4949]: I1001 15:56:18.038887 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:56:21 crc 
kubenswrapper[4949]: I1001 15:56:21.495439 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-dfd74"] Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.497995 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.500933 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfd74"] Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.618405 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-catalog-content\") pod \"community-operators-dfd74\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.618481 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-utilities\") pod \"community-operators-dfd74\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.618529 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9xdcx\" (UniqueName: \"kubernetes.io/projected/cd7e2de6-a131-47b0-ac94-aca6b79f7147-kube-api-access-9xdcx\") pod \"community-operators-dfd74\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.719877 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9xdcx\" (UniqueName: 
\"kubernetes.io/projected/cd7e2de6-a131-47b0-ac94-aca6b79f7147-kube-api-access-9xdcx\") pod \"community-operators-dfd74\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.719993 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-catalog-content\") pod \"community-operators-dfd74\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.720050 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-utilities\") pod \"community-operators-dfd74\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.720730 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-catalog-content\") pod \"community-operators-dfd74\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.720966 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-utilities\") pod \"community-operators-dfd74\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.748260 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9xdcx\" (UniqueName: 
\"kubernetes.io/projected/cd7e2de6-a131-47b0-ac94-aca6b79f7147-kube-api-access-9xdcx\") pod \"community-operators-dfd74\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:21 crc kubenswrapper[4949]: I1001 15:56:21.817797 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:22 crc kubenswrapper[4949]: I1001 15:56:22.971276 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-operator-controller-manager-745f9964cd-cwnfz" Oct 01 15:56:26 crc kubenswrapper[4949]: E1001 15:56:26.798906 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/openstack-k8s-operators/cinder-operator:87a9122de453c888e7b27af4d1987d7697f31616" Oct 01 15:56:26 crc kubenswrapper[4949]: E1001 15:56:26.799326 4949 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = Canceled desc = copying config: context canceled" image="38.102.83.80:5001/openstack-k8s-operators/cinder-operator:87a9122de453c888e7b27af4d1987d7697f31616" Oct 01 15:56:26 crc kubenswrapper[4949]: E1001 15:56:26.799543 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:38.102.83.80:5001/openstack-k8s-operators/cinder-operator:87a9122de453c888e7b27af4d1987d7697f31616,Command:[/manager],Args:[--health-probe-bind-address=:8081 --metrics-bind-address=127.0.0.1:8080 --leader-elect],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LEASE_DURATION,Value:30,ValueFrom:nil,},EnvVar{Name:RENEW_DEADLINE,Value:20,ValueFrom:nil,},EnvVar{Name:RETRY_PERIOD,Value:5,ValueFrom:nil,},EnvVar{Name:ENABLE_WEBHOOKS,Value:false,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{500 -3} {} 500m DecimalSI},memory: {{536870912 0} {} 
BinarySI},},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{268435456 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-qkv24,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:15,TimeoutSeconds:1,PeriodSeconds:20,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 8081 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-operator-controller-manager-795d876f9c-8wvgx_openstack-operators(e5ed691c-da8b-4bae-8d20-e92c1e062ea2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 01 15:56:27 crc kubenswrapper[4949]: E1001 15:56:27.210343 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc 
= copying config: context canceled\"" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" podUID="e5ed691c-da8b-4bae-8d20-e92c1e062ea2" Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.280766 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-dfd74"] Oct 01 15:56:27 crc kubenswrapper[4949]: W1001 15:56:27.318509 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd7e2de6_a131_47b0_ac94_aca6b79f7147.slice/crio-892a1e1240938819301d9bec4848f4ed416f693bfb9bee815b78967bfc53ef73 WatchSource:0}: Error finding container 892a1e1240938819301d9bec4848f4ed416f693bfb9bee815b78967bfc53ef73: Status 404 returned error can't find the container with id 892a1e1240938819301d9bec4848f4ed416f693bfb9bee815b78967bfc53ef73 Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.779550 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" event={"ID":"58d1b185-0213-427d-8f85-2b2636e0d121","Type":"ContainerStarted","Data":"84f114a51b3b1bd7d8e1d4033f2fdceacd60a33404350466e33483521d11efb8"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.784523 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" event={"ID":"36350170-f4ac-4f4b-ba23-a05d9300f63a","Type":"ContainerStarted","Data":"0c29c032574b3156657b46f31ebaca52149547e56dc53883e428bb3353799f4f"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.786280 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" event={"ID":"0853e80f-a3aa-4230-b028-f8a0887afb2f","Type":"ContainerStarted","Data":"5127cab9957f67428c526e31150184c581ba7144f7016e54f09d38c26d99a952"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.791401 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" event={"ID":"e5ed691c-da8b-4bae-8d20-e92c1e062ea2","Type":"ContainerStarted","Data":"0c6959b817c457b6001e5b432a45f4c9b8955417bd531fed680e85dbd43ba27d"} Oct 01 15:56:27 crc kubenswrapper[4949]: E1001 15:56:27.793778 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/openstack-k8s-operators/cinder-operator:87a9122de453c888e7b27af4d1987d7697f31616\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" podUID="e5ed691c-da8b-4bae-8d20-e92c1e062ea2" Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.794547 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" event={"ID":"61fc51ec-92f9-4dc7-a0ea-793712809ebc","Type":"ContainerStarted","Data":"58f627ab07c4328617a69fcb00660657f2323c1122a049a7b8867220e6e6d036"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.797830 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" event={"ID":"fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4","Type":"ContainerStarted","Data":"12f835e722ff96a9cfa0fedc2a31d15ed48900a0ce12e5bd14b98463b84895c4"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.804322 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" event={"ID":"39bff9fb-ac50-40cd-b23c-113763e3527e","Type":"ContainerStarted","Data":"90fae6bc038aa276177adffcec13f8dc6210ff4f573ddf218c62e2cdd74a8acb"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.811086 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfd74" 
event={"ID":"cd7e2de6-a131-47b0-ac94-aca6b79f7147","Type":"ContainerStarted","Data":"892a1e1240938819301d9bec4848f4ed416f693bfb9bee815b78967bfc53ef73"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.815424 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" event={"ID":"e8658648-cf73-468c-8291-7b6ad0f265e6","Type":"ContainerStarted","Data":"3919b58cb75bedc1a29002f86eef7d3d0815458d16819d18c973beee928b1c82"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.817013 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" event={"ID":"acce3e2f-7372-4703-a0e2-3dec09dc5b2d","Type":"ContainerStarted","Data":"f0af13e01201a5e909960fac50a54d5171a113c6373fc2375331d7d9794b79ca"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.818264 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" event={"ID":"1f506f8f-047a-4efa-8ff2-dbe310d0e12e","Type":"ContainerStarted","Data":"63dd9bbb11ae74afedc417c9c8f3f3abd0a0fd6c0d15dd318a7bf3e527398428"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.825979 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" event={"ID":"97aa883c-f9b1-4f83-884e-13fdd88beca7","Type":"ContainerStarted","Data":"a02d0bd7a07e460269a7869d9884eb1e0efcb9b676464cf79194f82da266265e"} Oct 01 15:56:27 crc kubenswrapper[4949]: I1001 15:56:27.833032 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" event={"ID":"6bec54a7-c02d-45e5-bc1a-22ca4e0d2229","Type":"ContainerStarted","Data":"c667acd144e405bccbb79fabda910e7adda5f5954d630e5bfea7ac600e4bfc57"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.840869 4949 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" event={"ID":"e8658648-cf73-468c-8291-7b6ad0f265e6","Type":"ContainerStarted","Data":"c07638a73d8d6bc523a8818d5ddb1a5f357e79259220f7d991e557fbdd24a011"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.841243 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.842883 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" event={"ID":"61fc51ec-92f9-4dc7-a0ea-793712809ebc","Type":"ContainerStarted","Data":"81dbbcaa2dc7fe6f7ca00fb4ad5d581d4abf2aa79b5a7e85b8a984818ca7a580"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.843603 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.855347 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" event={"ID":"97aa883c-f9b1-4f83-884e-13fdd88beca7","Type":"ContainerStarted","Data":"463951cc103302f8306d82b710a56cc00ab05a2e35a80b780cb9164719d65aa4"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.855779 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.857442 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" event={"ID":"58d1b185-0213-427d-8f85-2b2636e0d121","Type":"ContainerStarted","Data":"4dba703333875c9636ee358979ed1882c7cad013dc8bef78affc5728cab4a50c"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.857526 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.859461 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" event={"ID":"6bec54a7-c02d-45e5-bc1a-22ca4e0d2229","Type":"ContainerStarted","Data":"f942fb65b24f2aabc7c1197dce80c69b5bd2437e89416500409e82f95da99ea8"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.860172 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.861712 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" event={"ID":"39bff9fb-ac50-40cd-b23c-113763e3527e","Type":"ContainerStarted","Data":"31238db8960a492ae103e9ad3f690d126fa68f2b04ad220e194118dd883f7975"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.862169 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.863862 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" event={"ID":"36350170-f4ac-4f4b-ba23-a05d9300f63a","Type":"ContainerStarted","Data":"8c76935aafccc98389a6834a797622476a21a53d4dc846d08b04b85b4bb56dee"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.864276 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.869589 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" event={"ID":"acce3e2f-7372-4703-a0e2-3dec09dc5b2d","Type":"ContainerStarted","Data":"58c3dae5548a57a4d73adcec4f9fb39bed9f7838b491d0c0b8314ea60d7c4cfd"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.869725 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.871267 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" event={"ID":"013782e2-98ad-4401-af7d-22ac977c0e42","Type":"ContainerStarted","Data":"11d48b568069c4080ad7f85bded4da5cbc3a169b755ba986b67b2d12fd2e5ff1"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.871390 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" event={"ID":"013782e2-98ad-4401-af7d-22ac977c0e42","Type":"ContainerStarted","Data":"bffa18d850c9863a7475ec48d166aba1290d041c4fbe0ee869382f10af37863a"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.871471 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.876595 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" event={"ID":"1f506f8f-047a-4efa-8ff2-dbe310d0e12e","Type":"ContainerStarted","Data":"f38d2fb7f9a487d7cc086e723b33ab5133bfe2be25190431c66edd3d9d316901"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.876633 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.877484 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" podStartSLOduration=6.68717011 podStartE2EDuration="17.877464738s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:12.821961573 +0000 UTC m=+872.127567764" lastFinishedPulling="2025-10-01 15:56:24.012256201 +0000 UTC m=+883.317862392" observedRunningTime="2025-10-01 15:56:28.874049058 +0000 UTC m=+888.179655259" watchObservedRunningTime="2025-10-01 15:56:28.877464738 +0000 UTC m=+888.183070949" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.879108 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerID="b1ef6a44bc88b87a0e08ecb2616712f9c435a1d9e6b51a07a9c756a0d4c0394a" exitCode=0 Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.879174 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfd74" event={"ID":"cd7e2de6-a131-47b0-ac94-aca6b79f7147","Type":"ContainerDied","Data":"b1ef6a44bc88b87a0e08ecb2616712f9c435a1d9e6b51a07a9c756a0d4c0394a"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.885860 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" event={"ID":"0853e80f-a3aa-4230-b028-f8a0887afb2f","Type":"ContainerStarted","Data":"b4dff68cafab0765b0d3574914e3f2fc45899e324c2ab729976f8bc2873fa2ad"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.886412 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.898042 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" 
event={"ID":"611a7063-c924-4b35-b2d7-d4d48a7e8f7a","Type":"ContainerStarted","Data":"4a8f92416eb86418d0a307839f0b8cf9e4c3117c60d558a61397d4b86815e58d"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.898254 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" event={"ID":"611a7063-c924-4b35-b2d7-d4d48a7e8f7a","Type":"ContainerStarted","Data":"c97c10524ac94c0a24ee10e6ea82db11adfa0e620c33ce256ba3b77e366cdab2"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.898277 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.901945 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" event={"ID":"fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4","Type":"ContainerStarted","Data":"17b14919801562e27c75c8083f1aa7746717f0e26bfa358e2e47cdf2eb10c2c6"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.902956 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.910688 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" event={"ID":"c3ab5ef5-3877-4512-a279-5d4504fd7301","Type":"ContainerStarted","Data":"2bfc0433bd6842714d1304f18be422f7645d41b2ee1191017e85f0e323649050"} Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.910725 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.910735 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" event={"ID":"c3ab5ef5-3877-4512-a279-5d4504fd7301","Type":"ContainerStarted","Data":"cd1e8c9c84b26b1134e9c1597b94cf2e6665d9525ef9472b1a42618f831586a9"} Oct 01 15:56:28 crc kubenswrapper[4949]: E1001 15:56:28.911555 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"38.102.83.80:5001/openstack-k8s-operators/cinder-operator:87a9122de453c888e7b27af4d1987d7697f31616\\\"\"" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" podUID="e5ed691c-da8b-4bae-8d20-e92c1e062ea2" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.913214 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" podStartSLOduration=4.54843907 podStartE2EDuration="17.91320153s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.486788646 +0000 UTC m=+872.792394837" lastFinishedPulling="2025-10-01 15:56:26.851551106 +0000 UTC m=+886.157157297" observedRunningTime="2025-10-01 15:56:28.897797734 +0000 UTC m=+888.203403945" watchObservedRunningTime="2025-10-01 15:56:28.91320153 +0000 UTC m=+888.218807721" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.914321 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" podStartSLOduration=4.149463078 podStartE2EDuration="17.914312379s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.042308554 +0000 UTC m=+872.347914745" lastFinishedPulling="2025-10-01 15:56:26.807157835 +0000 UTC m=+886.112764046" observedRunningTime="2025-10-01 15:56:28.911995839 +0000 UTC m=+888.217602030" watchObservedRunningTime="2025-10-01 15:56:28.914312379 +0000 UTC m=+888.219918580" Oct 01 
15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.935812 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" podStartSLOduration=4.594210589 podStartE2EDuration="17.935792306s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.519629673 +0000 UTC m=+872.825235864" lastFinishedPulling="2025-10-01 15:56:26.86121139 +0000 UTC m=+886.166817581" observedRunningTime="2025-10-01 15:56:28.931456841 +0000 UTC m=+888.237063032" watchObservedRunningTime="2025-10-01 15:56:28.935792306 +0000 UTC m=+888.241398497" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.959929 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" podStartSLOduration=4.625109404 podStartE2EDuration="17.959910153s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.516368497 +0000 UTC m=+872.821974688" lastFinishedPulling="2025-10-01 15:56:26.851169246 +0000 UTC m=+886.156775437" observedRunningTime="2025-10-01 15:56:28.956222715 +0000 UTC m=+888.261828906" watchObservedRunningTime="2025-10-01 15:56:28.959910153 +0000 UTC m=+888.265516354" Oct 01 15:56:28 crc kubenswrapper[4949]: I1001 15:56:28.985591 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" podStartSLOduration=4.19612046 podStartE2EDuration="17.985576259s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.06184848 +0000 UTC m=+872.367454671" lastFinishedPulling="2025-10-01 15:56:26.851304279 +0000 UTC m=+886.156910470" observedRunningTime="2025-10-01 15:56:28.980176737 +0000 UTC m=+888.285782928" watchObservedRunningTime="2025-10-01 15:56:28.985576259 +0000 UTC m=+888.291182450" Oct 01 15:56:28 crc 
kubenswrapper[4949]: I1001 15:56:28.999441 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" podStartSLOduration=4.645017228 podStartE2EDuration="17.999420504s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.503869127 +0000 UTC m=+872.809475318" lastFinishedPulling="2025-10-01 15:56:26.858272403 +0000 UTC m=+886.163878594" observedRunningTime="2025-10-01 15:56:28.994958136 +0000 UTC m=+888.300564337" watchObservedRunningTime="2025-10-01 15:56:28.999420504 +0000 UTC m=+888.305026696" Oct 01 15:56:29 crc kubenswrapper[4949]: I1001 15:56:29.016517 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" podStartSLOduration=4.670400089 podStartE2EDuration="18.016501605s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.515850084 +0000 UTC m=+872.821456265" lastFinishedPulling="2025-10-01 15:56:26.86195159 +0000 UTC m=+886.167557781" observedRunningTime="2025-10-01 15:56:29.01063602 +0000 UTC m=+888.316242211" watchObservedRunningTime="2025-10-01 15:56:29.016501605 +0000 UTC m=+888.322107796" Oct 01 15:56:29 crc kubenswrapper[4949]: I1001 15:56:29.026247 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" podStartSLOduration=4.945211266 podStartE2EDuration="18.026231831s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.794063431 +0000 UTC m=+873.099669622" lastFinishedPulling="2025-10-01 15:56:26.875083996 +0000 UTC m=+886.180690187" observedRunningTime="2025-10-01 15:56:29.022814911 +0000 UTC m=+888.328421102" watchObservedRunningTime="2025-10-01 15:56:29.026231831 +0000 UTC m=+888.331838012" Oct 01 15:56:29 crc kubenswrapper[4949]: I1001 
15:56:29.076467 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" podStartSLOduration=4.69098111 podStartE2EDuration="18.076449196s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.504169955 +0000 UTC m=+872.809776146" lastFinishedPulling="2025-10-01 15:56:26.889638041 +0000 UTC m=+886.195244232" observedRunningTime="2025-10-01 15:56:29.072613425 +0000 UTC m=+888.378219616" watchObservedRunningTime="2025-10-01 15:56:29.076449196 +0000 UTC m=+888.382055387" Oct 01 15:56:29 crc kubenswrapper[4949]: I1001 15:56:29.094294 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" podStartSLOduration=4.660698202 podStartE2EDuration="18.094277186s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.428964502 +0000 UTC m=+872.734570693" lastFinishedPulling="2025-10-01 15:56:26.862543486 +0000 UTC m=+886.168149677" observedRunningTime="2025-10-01 15:56:29.090093036 +0000 UTC m=+888.395699227" watchObservedRunningTime="2025-10-01 15:56:29.094277186 +0000 UTC m=+888.399883387" Oct 01 15:56:29 crc kubenswrapper[4949]: I1001 15:56:29.108847 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" podStartSLOduration=4.691632208 podStartE2EDuration="18.1088284s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.466846571 +0000 UTC m=+872.772452762" lastFinishedPulling="2025-10-01 15:56:26.884042763 +0000 UTC m=+886.189648954" observedRunningTime="2025-10-01 15:56:29.10617987 +0000 UTC m=+888.411786071" watchObservedRunningTime="2025-10-01 15:56:29.1088284 +0000 UTC m=+888.414434601" Oct 01 15:56:29 crc kubenswrapper[4949]: I1001 15:56:29.122776 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" podStartSLOduration=5.393659003 podStartE2EDuration="18.122748787s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.078898879 +0000 UTC m=+872.384505070" lastFinishedPulling="2025-10-01 15:56:25.807988663 +0000 UTC m=+885.113594854" observedRunningTime="2025-10-01 15:56:29.122359947 +0000 UTC m=+888.427966158" watchObservedRunningTime="2025-10-01 15:56:29.122748787 +0000 UTC m=+888.428354978" Oct 01 15:56:29 crc kubenswrapper[4949]: I1001 15:56:29.140689 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" podStartSLOduration=4.986594322 podStartE2EDuration="19.140670859s" podCreationTimestamp="2025-10-01 15:56:10 +0000 UTC" firstStartedPulling="2025-10-01 15:56:12.652191635 +0000 UTC m=+871.957797826" lastFinishedPulling="2025-10-01 15:56:26.806268172 +0000 UTC m=+886.111874363" observedRunningTime="2025-10-01 15:56:29.137713821 +0000 UTC m=+888.443320022" watchObservedRunningTime="2025-10-01 15:56:29.140670859 +0000 UTC m=+888.446277050" Oct 01 15:56:29 crc kubenswrapper[4949]: I1001 15:56:29.919058 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfd74" event={"ID":"cd7e2de6-a131-47b0-ac94-aca6b79f7147","Type":"ContainerStarted","Data":"fb71326053ced7424d07a84d3272be57c8b8ffa179c5f154238cfad4488b7044"} Oct 01 15:56:30 crc kubenswrapper[4949]: I1001 15:56:30.931842 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerID="fb71326053ced7424d07a84d3272be57c8b8ffa179c5f154238cfad4488b7044" exitCode=0 Oct 01 15:56:30 crc kubenswrapper[4949]: I1001 15:56:30.931937 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfd74" 
event={"ID":"cd7e2de6-a131-47b0-ac94-aca6b79f7147","Type":"ContainerDied","Data":"fb71326053ced7424d07a84d3272be57c8b8ffa179c5f154238cfad4488b7044"} Oct 01 15:56:30 crc kubenswrapper[4949]: I1001 15:56:30.935887 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" event={"ID":"02d48b56-707e-4347-88c5-0429a487042c","Type":"ContainerStarted","Data":"cdcc2af244ebc79ed59b13de854e6f3b54e7b707150b544cd07324afea2d8fab"} Oct 01 15:56:30 crc kubenswrapper[4949]: I1001 15:56:30.936521 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" Oct 01 15:56:30 crc kubenswrapper[4949]: I1001 15:56:30.969201 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" podStartSLOduration=3.115345205 podStartE2EDuration="19.969180775s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.573320649 +0000 UTC m=+872.878926840" lastFinishedPulling="2025-10-01 15:56:30.427156219 +0000 UTC m=+889.732762410" observedRunningTime="2025-10-01 15:56:30.964646805 +0000 UTC m=+890.270252996" watchObservedRunningTime="2025-10-01 15:56:30.969180775 +0000 UTC m=+890.274786966" Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.069285 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/octavia-operator-controller-manager-7b787867f4-r5smf" Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.103440 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-84d6b4b759-9qvwb" Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.286740 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack-operators/placement-operator-controller-manager-589c58c6c-62v4q" Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.654945 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/watcher-operator-controller-manager-6b9957f54f-z2b6f" Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.960300 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" event={"ID":"3ad42e41-782a-480a-b8f5-e449eddb1649","Type":"ContainerStarted","Data":"0d27e44660fbf538335ae7daf1091d10d0261a66e722635e4a757c7ac0f283b2"} Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.963659 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfd74" event={"ID":"cd7e2de6-a131-47b0-ac94-aca6b79f7147","Type":"ContainerStarted","Data":"a0bda4f55821f88ec9bff6f0dc70ab8d119fc70562bc129f82e1817bf3995225"} Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.974502 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" event={"ID":"6f80de4b-ac33-4b39-b105-5927fd6511fc","Type":"ContainerStarted","Data":"c0361f315a483e2d3abe973b9bb0177bf17c30490831c653ef5ed1e56d9a1316"} Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.975087 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.977143 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh" podStartSLOduration=3.461439893 podStartE2EDuration="21.977117134s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.617391311 +0000 UTC m=+872.922997502" lastFinishedPulling="2025-10-01 15:56:32.133068552 +0000 UTC 
m=+891.438674743" observedRunningTime="2025-10-01 15:56:32.975566602 +0000 UTC m=+892.281172793" watchObservedRunningTime="2025-10-01 15:56:32.977117134 +0000 UTC m=+892.282723315" Oct 01 15:56:32 crc kubenswrapper[4949]: I1001 15:56:32.995939 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" podStartSLOduration=3.4029663 podStartE2EDuration="21.995916969s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.539965929 +0000 UTC m=+872.845572120" lastFinishedPulling="2025-10-01 15:56:32.132916598 +0000 UTC m=+891.438522789" observedRunningTime="2025-10-01 15:56:32.992792316 +0000 UTC m=+892.298398527" watchObservedRunningTime="2025-10-01 15:56:32.995916969 +0000 UTC m=+892.301523160" Oct 01 15:56:33 crc kubenswrapper[4949]: I1001 15:56:33.015107 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-dfd74" podStartSLOduration=8.766529157 podStartE2EDuration="12.015092155s" podCreationTimestamp="2025-10-01 15:56:21 +0000 UTC" firstStartedPulling="2025-10-01 15:56:28.884241557 +0000 UTC m=+888.189847748" lastFinishedPulling="2025-10-01 15:56:32.132804555 +0000 UTC m=+891.438410746" observedRunningTime="2025-10-01 15:56:33.008342567 +0000 UTC m=+892.313948758" watchObservedRunningTime="2025-10-01 15:56:33.015092155 +0000 UTC m=+892.320698346" Oct 01 15:56:34 crc kubenswrapper[4949]: I1001 15:56:34.989169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" event={"ID":"2d130278-c73a-4681-ae0f-76385dcf4de9","Type":"ContainerStarted","Data":"b7e5d04f2779319c6c306ebd5c30e04f493e20dc3cb57352046d408311190869"} Oct 01 15:56:34 crc kubenswrapper[4949]: I1001 15:56:34.989777 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" Oct 01 15:56:34 crc kubenswrapper[4949]: I1001 15:56:34.991253 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" event={"ID":"ad53e0de-7d24-447d-82c6-ab0a523c913a","Type":"ContainerStarted","Data":"865abdc5f0277b2300b66dd6d012d33567c7d9d9889026fec9829aa126a3c5b2"} Oct 01 15:56:34 crc kubenswrapper[4949]: I1001 15:56:34.991434 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" Oct 01 15:56:34 crc kubenswrapper[4949]: I1001 15:56:34.993619 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" event={"ID":"420e19a1-ab84-4852-ab58-8242a09d5621","Type":"ContainerStarted","Data":"ff5de9620ecaf6d9fad00f352adad2f0058f2ec97dc8ea0a5a61052ad1c05aac"} Oct 01 15:56:34 crc kubenswrapper[4949]: I1001 15:56:34.993792 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" Oct 01 15:56:35 crc kubenswrapper[4949]: I1001 15:56:35.008709 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" podStartSLOduration=3.038056157 podStartE2EDuration="24.008689585s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.61770612 +0000 UTC m=+872.923312311" lastFinishedPulling="2025-10-01 15:56:34.588339548 +0000 UTC m=+893.893945739" observedRunningTime="2025-10-01 15:56:35.006507457 +0000 UTC m=+894.312113648" watchObservedRunningTime="2025-10-01 15:56:35.008689585 +0000 UTC m=+894.314295776" Oct 01 15:56:35 crc kubenswrapper[4949]: I1001 15:56:35.033190 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" podStartSLOduration=3.062259816 podStartE2EDuration="24.033172311s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.617511205 +0000 UTC m=+872.923117396" lastFinishedPulling="2025-10-01 15:56:34.5884237 +0000 UTC m=+893.894029891" observedRunningTime="2025-10-01 15:56:35.028550208 +0000 UTC m=+894.334156399" watchObservedRunningTime="2025-10-01 15:56:35.033172311 +0000 UTC m=+894.338778502" Oct 01 15:56:35 crc kubenswrapper[4949]: I1001 15:56:35.049782 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" podStartSLOduration=3.055099415 podStartE2EDuration="24.049763308s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.593386658 +0000 UTC m=+872.898992849" lastFinishedPulling="2025-10-01 15:56:34.588050551 +0000 UTC m=+893.893656742" observedRunningTime="2025-10-01 15:56:35.046172753 +0000 UTC m=+894.351778954" watchObservedRunningTime="2025-10-01 15:56:35.049763308 +0000 UTC m=+894.355369499" Oct 01 15:56:36 crc kubenswrapper[4949]: I1001 15:56:36.002534 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" event={"ID":"60ac792e-9135-4ea0-84f1-1708c0421e70","Type":"ContainerStarted","Data":"6ade6900465e37aed829c7a7478ce4ad16e49169723e401badd9431c0e6bff3f"} Oct 01 15:56:36 crc kubenswrapper[4949]: I1001 15:56:36.034545 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" podStartSLOduration=3.734912964 podStartE2EDuration="25.03452633s" podCreationTimestamp="2025-10-01 15:56:11 +0000 UTC" firstStartedPulling="2025-10-01 15:56:13.593383518 +0000 UTC m=+872.898989709" lastFinishedPulling="2025-10-01 
15:56:34.892996884 +0000 UTC m=+894.198603075" observedRunningTime="2025-10-01 15:56:36.028226434 +0000 UTC m=+895.333832635" watchObservedRunningTime="2025-10-01 15:56:36.03452633 +0000 UTC m=+895.340132521" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.326467 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/barbican-operator-controller-manager-6ff8b75857-2dwb2" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.368717 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/designate-operator-controller-manager-84f4f7b77b-rbgd2" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.427882 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-84958c4d49-jshxk" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.537066 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-9f4696d94-5w8c4" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.549433 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/heat-operator-controller-manager-5d889d78cf-4k7z5" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.626105 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-9d6c5db85-cg7tn" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.632816 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ironic-operator-controller-manager-5cd4858477-5pcwg" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.681520 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-5bd55b4bff-mrm9t" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.754914 4949 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/manila-operator-controller-manager-6d68dbc695-mtq8j" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.790383 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-88c7-98jct" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.819619 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.819663 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.863102 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.892232 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/nova-operator-controller-manager-64cd67b5cb-9tqls" Oct 01 15:56:41 crc kubenswrapper[4949]: I1001 15:56:41.900599 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/neutron-operator-controller-manager-849d5b9b84-lhdzd" Oct 01 15:56:42 crc kubenswrapper[4949]: I1001 15:56:42.079295 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:42 crc kubenswrapper[4949]: I1001 15:56:42.149438 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dfd74"] Oct 01 15:56:42 crc kubenswrapper[4949]: I1001 15:56:42.191338 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/telemetry-operator-controller-manager-b8d54b5d7-n8frs" Oct 01 15:56:42 crc kubenswrapper[4949]: I1001 15:56:42.294317 
4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/ovn-operator-controller-manager-9976ff44c-mblm4" Oct 01 15:56:42 crc kubenswrapper[4949]: I1001 15:56:42.525210 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:42 crc kubenswrapper[4949]: I1001 15:56:42.530700 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/openstack-baremetal-operator-controller-manager-77b9676b8crvdqk" Oct 01 15:56:42 crc kubenswrapper[4949]: I1001 15:56:42.645257 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/test-operator-controller-manager-85777745bb-l7zmk" Oct 01 15:56:43 crc kubenswrapper[4949]: I1001 15:56:43.055805 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" event={"ID":"e5ed691c-da8b-4bae-8d20-e92c1e062ea2","Type":"ContainerStarted","Data":"3f42a36c85874bce226b7dd2fe80bb1393ee2978e436287b077a47f27eb724e2"} Oct 01 15:56:43 crc kubenswrapper[4949]: I1001 15:56:43.077332 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" podStartSLOduration=3.67047118 podStartE2EDuration="33.0773137s" podCreationTimestamp="2025-10-01 15:56:10 +0000 UTC" firstStartedPulling="2025-10-01 15:56:12.652555854 +0000 UTC m=+871.958162045" lastFinishedPulling="2025-10-01 15:56:42.059398374 +0000 UTC m=+901.365004565" observedRunningTime="2025-10-01 15:56:43.069591796 +0000 UTC m=+902.375197987" watchObservedRunningTime="2025-10-01 15:56:43.0773137 +0000 UTC m=+902.382919881" Oct 01 15:56:44 crc kubenswrapper[4949]: I1001 15:56:44.062489 4949 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-marketplace/community-operators-dfd74" podUID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerName="registry-server" containerID="cri-o://a0bda4f55821f88ec9bff6f0dc70ab8d119fc70562bc129f82e1817bf3995225" gracePeriod=2 Oct 01 15:56:45 crc kubenswrapper[4949]: I1001 15:56:45.072194 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerID="a0bda4f55821f88ec9bff6f0dc70ab8d119fc70562bc129f82e1817bf3995225" exitCode=0 Oct 01 15:56:45 crc kubenswrapper[4949]: I1001 15:56:45.072300 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfd74" event={"ID":"cd7e2de6-a131-47b0-ac94-aca6b79f7147","Type":"ContainerDied","Data":"a0bda4f55821f88ec9bff6f0dc70ab8d119fc70562bc129f82e1817bf3995225"} Oct 01 15:56:48 crc kubenswrapper[4949]: I1001 15:56:48.038502 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:56:48 crc kubenswrapper[4949]: I1001 15:56:48.039408 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:56:50 crc kubenswrapper[4949]: I1001 15:56:50.968256 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.115315 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-dfd74" event={"ID":"cd7e2de6-a131-47b0-ac94-aca6b79f7147","Type":"ContainerDied","Data":"892a1e1240938819301d9bec4848f4ed416f693bfb9bee815b78967bfc53ef73"} Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.115390 4949 scope.go:117] "RemoveContainer" containerID="a0bda4f55821f88ec9bff6f0dc70ab8d119fc70562bc129f82e1817bf3995225" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.115408 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-dfd74" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.124400 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xdcx\" (UniqueName: \"kubernetes.io/projected/cd7e2de6-a131-47b0-ac94-aca6b79f7147-kube-api-access-9xdcx\") pod \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.124538 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-catalog-content\") pod \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.124578 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-utilities\") pod \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\" (UID: \"cd7e2de6-a131-47b0-ac94-aca6b79f7147\") " Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.125521 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-utilities" (OuterVolumeSpecName: "utilities") pod "cd7e2de6-a131-47b0-ac94-aca6b79f7147" (UID: "cd7e2de6-a131-47b0-ac94-aca6b79f7147"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.130364 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd7e2de6-a131-47b0-ac94-aca6b79f7147-kube-api-access-9xdcx" (OuterVolumeSpecName: "kube-api-access-9xdcx") pod "cd7e2de6-a131-47b0-ac94-aca6b79f7147" (UID: "cd7e2de6-a131-47b0-ac94-aca6b79f7147"). InnerVolumeSpecName "kube-api-access-9xdcx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.138985 4949 scope.go:117] "RemoveContainer" containerID="fb71326053ced7424d07a84d3272be57c8b8ffa179c5f154238cfad4488b7044" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.165459 4949 scope.go:117] "RemoveContainer" containerID="b1ef6a44bc88b87a0e08ecb2616712f9c435a1d9e6b51a07a9c756a0d4c0394a" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.175171 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cd7e2de6-a131-47b0-ac94-aca6b79f7147" (UID: "cd7e2de6-a131-47b0-ac94-aca6b79f7147"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.226079 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.226116 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cd7e2de6-a131-47b0-ac94-aca6b79f7147-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.226139 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xdcx\" (UniqueName: \"kubernetes.io/projected/cd7e2de6-a131-47b0-ac94-aca6b79f7147-kube-api-access-9xdcx\") on node \"crc\" DevicePath \"\"" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.351935 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.354275 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/cinder-operator-controller-manager-795d876f9c-8wvgx" Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.456216 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-dfd74"] Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.459567 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-dfd74"] Oct 01 15:56:51 crc kubenswrapper[4949]: I1001 15:56:51.610332 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" path="/var/lib/kubelet/pods/cd7e2de6-a131-47b0-ac94-aca6b79f7147/volumes" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.339779 4949 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/dnsmasq-dns-675f4bcbfc-zh6mt"] Oct 01 15:57:06 crc kubenswrapper[4949]: E1001 15:57:06.340510 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerName="registry-server" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.340526 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerName="registry-server" Oct 01 15:57:06 crc kubenswrapper[4949]: E1001 15:57:06.340552 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerName="extract-content" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.340558 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerName="extract-content" Oct 01 15:57:06 crc kubenswrapper[4949]: E1001 15:57:06.340574 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerName="extract-utilities" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.340580 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerName="extract-utilities" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.340710 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd7e2de6-a131-47b0-ac94-aca6b79f7147" containerName="registry-server" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.341532 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.343547 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openshift-service-ca.crt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.344788 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dnsmasq-dns-dockercfg-jdnff" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.344930 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"kube-root-ca.crt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.345057 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.346816 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6b5vl\" (UniqueName: \"kubernetes.io/projected/e3c4f0da-e41c-4091-9e44-989b59617c83-kube-api-access-6b5vl\") pod \"dnsmasq-dns-675f4bcbfc-zh6mt\" (UID: \"e3c4f0da-e41c-4091-9e44-989b59617c83\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.346872 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c4f0da-e41c-4091-9e44-989b59617c83-config\") pod \"dnsmasq-dns-675f4bcbfc-zh6mt\" (UID: \"e3c4f0da-e41c-4091-9e44-989b59617c83\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.354515 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zh6mt"] Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.410778 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6s8xp"] Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.412304 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.414892 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"dns-svc" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.419498 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6s8xp"] Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.449187 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6s8xp\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.449278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6b5vl\" (UniqueName: \"kubernetes.io/projected/e3c4f0da-e41c-4091-9e44-989b59617c83-kube-api-access-6b5vl\") pod \"dnsmasq-dns-675f4bcbfc-zh6mt\" (UID: \"e3c4f0da-e41c-4091-9e44-989b59617c83\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.449303 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c4f0da-e41c-4091-9e44-989b59617c83-config\") pod \"dnsmasq-dns-675f4bcbfc-zh6mt\" (UID: \"e3c4f0da-e41c-4091-9e44-989b59617c83\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.449322 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-config\") pod \"dnsmasq-dns-78dd6ddcc-6s8xp\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 
15:57:06.449369 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkmww\" (UniqueName: \"kubernetes.io/projected/f9c1d21c-695d-40b8-959a-36cd897d56f6-kube-api-access-wkmww\") pod \"dnsmasq-dns-78dd6ddcc-6s8xp\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.450361 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c4f0da-e41c-4091-9e44-989b59617c83-config\") pod \"dnsmasq-dns-675f4bcbfc-zh6mt\" (UID: \"e3c4f0da-e41c-4091-9e44-989b59617c83\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.474247 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6b5vl\" (UniqueName: \"kubernetes.io/projected/e3c4f0da-e41c-4091-9e44-989b59617c83-kube-api-access-6b5vl\") pod \"dnsmasq-dns-675f4bcbfc-zh6mt\" (UID: \"e3c4f0da-e41c-4091-9e44-989b59617c83\") " pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.550080 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkmww\" (UniqueName: \"kubernetes.io/projected/f9c1d21c-695d-40b8-959a-36cd897d56f6-kube-api-access-wkmww\") pod \"dnsmasq-dns-78dd6ddcc-6s8xp\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.550202 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6s8xp\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.550268 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-config\") pod \"dnsmasq-dns-78dd6ddcc-6s8xp\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.551152 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-dns-svc\") pod \"dnsmasq-dns-78dd6ddcc-6s8xp\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.551190 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-config\") pod \"dnsmasq-dns-78dd6ddcc-6s8xp\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.567368 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkmww\" (UniqueName: \"kubernetes.io/projected/f9c1d21c-695d-40b8-959a-36cd897d56f6-kube-api-access-wkmww\") pod \"dnsmasq-dns-78dd6ddcc-6s8xp\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.678323 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" Oct 01 15:57:06 crc kubenswrapper[4949]: I1001 15:57:06.731703 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:07 crc kubenswrapper[4949]: I1001 15:57:07.121166 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zh6mt"] Oct 01 15:57:07 crc kubenswrapper[4949]: I1001 15:57:07.132283 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 15:57:07 crc kubenswrapper[4949]: I1001 15:57:07.201823 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6s8xp"] Oct 01 15:57:07 crc kubenswrapper[4949]: W1001 15:57:07.206716 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf9c1d21c_695d_40b8_959a_36cd897d56f6.slice/crio-2ff52cf8620a045ee0e029bf1ac4cb62c0fba699861e2d77a9d33625fdb07591 WatchSource:0}: Error finding container 2ff52cf8620a045ee0e029bf1ac4cb62c0fba699861e2d77a9d33625fdb07591: Status 404 returned error can't find the container with id 2ff52cf8620a045ee0e029bf1ac4cb62c0fba699861e2d77a9d33625fdb07591 Oct 01 15:57:07 crc kubenswrapper[4949]: I1001 15:57:07.223826 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" event={"ID":"e3c4f0da-e41c-4091-9e44-989b59617c83","Type":"ContainerStarted","Data":"d5eeec89310a54604b8a72088528aec994d2430cda3c43198185933aba3b8f0b"} Oct 01 15:57:07 crc kubenswrapper[4949]: I1001 15:57:07.225452 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" event={"ID":"f9c1d21c-695d-40b8-959a-36cd897d56f6","Type":"ContainerStarted","Data":"2ff52cf8620a045ee0e029bf1ac4cb62c0fba699861e2d77a9d33625fdb07591"} Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.380341 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zh6mt"] Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.411462 4949 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p9pv5"] Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.412640 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.441907 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p9pv5"] Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.496900 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-config\") pod \"dnsmasq-dns-666b6646f7-p9pv5\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") " pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.496963 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djbzn\" (UniqueName: \"kubernetes.io/projected/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-kube-api-access-djbzn\") pod \"dnsmasq-dns-666b6646f7-p9pv5\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") " pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.497018 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-p9pv5\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") " pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.597504 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-config\") pod \"dnsmasq-dns-666b6646f7-p9pv5\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") " pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 
15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.597613 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-djbzn\" (UniqueName: \"kubernetes.io/projected/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-kube-api-access-djbzn\") pod \"dnsmasq-dns-666b6646f7-p9pv5\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") " pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.597991 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-p9pv5\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") " pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.598364 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-config\") pod \"dnsmasq-dns-666b6646f7-p9pv5\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") " pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.598694 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-dns-svc\") pod \"dnsmasq-dns-666b6646f7-p9pv5\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") " pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.627907 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-djbzn\" (UniqueName: \"kubernetes.io/projected/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-kube-api-access-djbzn\") pod \"dnsmasq-dns-666b6646f7-p9pv5\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") " pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.751574 4949 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.779254 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6s8xp"] Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.820831 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znlf7"] Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.822100 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.834584 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znlf7"] Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.904572 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkk9b\" (UniqueName: \"kubernetes.io/projected/c27ad98d-1bd3-4d18-b0c5-882f5e080459-kube-api-access-qkk9b\") pod \"dnsmasq-dns-57d769cc4f-znlf7\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") " pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.904647 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-znlf7\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") " pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:09 crc kubenswrapper[4949]: I1001 15:57:09.904699 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-config\") pod \"dnsmasq-dns-57d769cc4f-znlf7\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") " pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:10 crc 
kubenswrapper[4949]: I1001 15:57:10.005467 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-znlf7\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") " pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.005848 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-config\") pod \"dnsmasq-dns-57d769cc4f-znlf7\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") " pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.005900 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkk9b\" (UniqueName: \"kubernetes.io/projected/c27ad98d-1bd3-4d18-b0c5-882f5e080459-kube-api-access-qkk9b\") pod \"dnsmasq-dns-57d769cc4f-znlf7\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") " pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.006647 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-dns-svc\") pod \"dnsmasq-dns-57d769cc4f-znlf7\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") " pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.007084 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-config\") pod \"dnsmasq-dns-57d769cc4f-znlf7\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") " pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.025979 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qkk9b\" (UniqueName: \"kubernetes.io/projected/c27ad98d-1bd3-4d18-b0c5-882f5e080459-kube-api-access-qkk9b\") pod \"dnsmasq-dns-57d769cc4f-znlf7\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") " pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.178156 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.407076 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p9pv5"] Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.606925 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.611340 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.617806 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.618030 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.618228 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-server-dockercfg-77hcf" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.618380 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.618646 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.618672 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 
15:57:10.618651 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.625463 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.625543 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.625611 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.625710 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4w8w\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-kube-api-access-z4w8w\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.625761 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: 
\"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.625788 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.625872 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.625931 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-config-data\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.625998 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.626025 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" 
Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.626086 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.630755 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.661746 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znlf7"] Oct 01 15:57:10 crc kubenswrapper[4949]: W1001 15:57:10.670500 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc27ad98d_1bd3_4d18_b0c5_882f5e080459.slice/crio-154da57f56647d6b3dfef7dffc3bc9c9107f464201693b08ad882f1e38101bc2 WatchSource:0}: Error finding container 154da57f56647d6b3dfef7dffc3bc9c9107f464201693b08ad882f1e38101bc2: Status 404 returned error can't find the container with id 154da57f56647d6b3dfef7dffc3bc9c9107f464201693b08ad882f1e38101bc2 Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727574 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727639 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4w8w\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-kube-api-access-z4w8w\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc 
kubenswrapper[4949]: I1001 15:57:10.727655 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727672 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727717 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-config-data\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727747 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727766 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727787 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727818 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.727833 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.728623 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.728899 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") 
" pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.728977 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.729425 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.729774 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.729811 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-config-data\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.734298 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.736427 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" 
(UniqueName: \"kubernetes.io/downward-api/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.737034 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.737320 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.743503 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4w8w\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-kube-api-access-z4w8w\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.756426 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " pod="openstack/rabbitmq-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.945078 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.946433 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.948217 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.948240 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.948416 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.948454 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.948527 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.948887 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvvb7" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.950732 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.980608 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 01 15:57:10 crc kubenswrapper[4949]: I1001 15:57:10.991427 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134047 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134091 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134131 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134270 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ls2ml\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-kube-api-access-ls2ml\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134337 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134454 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134505 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134558 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134592 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134625 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " 
pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.134676 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236033 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236086 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236112 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236158 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ls2ml\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-kube-api-access-ls2ml\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 
15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236186 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236210 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236234 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236275 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236311 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.236341 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.237287 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.238191 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.238299 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.238199 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: 
\"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.238803 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.239402 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.241715 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.242524 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.242731 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.251405 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.253895 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ls2ml\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-kube-api-access-ls2ml\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.260677 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.274972 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.282862 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" event={"ID":"762283e6-fbe2-4174-92e3-ffc1dc5e76f7","Type":"ContainerStarted","Data":"ab05bd7f8f1755aff14027c9f6be98de3adfd9b9217036781f59b558e227bef6"} Oct 01 15:57:11 crc kubenswrapper[4949]: I1001 15:57:11.285789 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" event={"ID":"c27ad98d-1bd3-4d18-b0c5-882f5e080459","Type":"ContainerStarted","Data":"154da57f56647d6b3dfef7dffc3bc9c9107f464201693b08ad882f1e38101bc2"} Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.459489 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-galera-0"] Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.463991 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.466823 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-dockercfg-sdqlw" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.467004 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-svc" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.467139 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"osp-secret" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.468490 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-scripts" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.469447 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config-data" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.478291 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"combined-ca-bundle" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.486628 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.580915 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.582186 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.585095 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5cdf223-529e-4d39-bfc1-7483fbd94a69-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.585169 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5cdf223-529e-4d39-bfc1-7483fbd94a69-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.585216 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5cdf223-529e-4d39-bfc1-7483fbd94a69-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.585253 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5cdf223-529e-4d39-bfc1-7483fbd94a69-combined-ca-bundle\") pod 
\"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.585276 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5cdf223-529e-4d39-bfc1-7483fbd94a69-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.585298 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9bdt\" (UniqueName: \"kubernetes.io/projected/d5cdf223-529e-4d39-bfc1-7483fbd94a69-kube-api-access-n9bdt\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.585355 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d5cdf223-529e-4d39-bfc1-7483fbd94a69-secrets\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.585374 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5cdf223-529e-4d39-bfc1-7483fbd94a69-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.585393 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " 
pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.586801 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-scripts" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.587492 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-cell1-config-data" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.587617 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"galera-openstack-cell1-dockercfg-7mmcg" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.588615 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-galera-openstack-cell1-svc" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.591409 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.686726 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5cdf223-529e-4d39-bfc1-7483fbd94a69-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.686764 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.686791 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-secrets\") pod \"openstack-cell1-galera-0\" (UID: 
\"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.686827 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d5cdf223-529e-4d39-bfc1-7483fbd94a69-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.686851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.686874 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5cdf223-529e-4d39-bfc1-7483fbd94a69-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.686901 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n9bdt\" (UniqueName: \"kubernetes.io/projected/d5cdf223-529e-4d39-bfc1-7483fbd94a69-kube-api-access-n9bdt\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.686962 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " 
pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687012 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687037 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687066 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d5cdf223-529e-4d39-bfc1-7483fbd94a69-secrets\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5cdf223-529e-4d39-bfc1-7483fbd94a69-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687117 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687162 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687201 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sr68\" (UniqueName: \"kubernetes.io/projected/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-kube-api-access-8sr68\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687232 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5cdf223-529e-4d39-bfc1-7483fbd94a69-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.687278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5cdf223-529e-4d39-bfc1-7483fbd94a69-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.688198 4949 
operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") device mount path \"/mnt/openstack/pv02\"" pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.688598 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/d5cdf223-529e-4d39-bfc1-7483fbd94a69-config-data-generated\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.688890 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/d5cdf223-529e-4d39-bfc1-7483fbd94a69-kolla-config\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.689147 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/d5cdf223-529e-4d39-bfc1-7483fbd94a69-config-data-default\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.689466 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d5cdf223-529e-4d39-bfc1-7483fbd94a69-operator-scripts\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.693850 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d5cdf223-529e-4d39-bfc1-7483fbd94a69-combined-ca-bundle\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.698095 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/d5cdf223-529e-4d39-bfc1-7483fbd94a69-galera-tls-certs\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.699226 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/d5cdf223-529e-4d39-bfc1-7483fbd94a69-secrets\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.705184 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9bdt\" (UniqueName: \"kubernetes.io/projected/d5cdf223-529e-4d39-bfc1-7483fbd94a69-kube-api-access-n9bdt\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.715830 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"openstack-galera-0\" (UID: \"d5cdf223-529e-4d39-bfc1-7483fbd94a69\") " pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788377 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " 
pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788462 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788510 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788534 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788575 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8sr68\" (UniqueName: \"kubernetes.io/projected/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-kube-api-access-8sr68\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 
15:57:13.788621 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788641 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") device mount path \"/mnt/openstack/pv09\"" pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788673 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788699 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.788920 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-config-data-generated\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.789345 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-kolla-config\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.789653 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.790596 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-config-data-default\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.791309 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-operator-scripts\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.792485 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secrets\" (UniqueName: \"kubernetes.io/secret/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-secrets\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.794693 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"galera-tls-certs\" (UniqueName: \"kubernetes.io/secret/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-galera-tls-certs\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.796795 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-combined-ca-bundle\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.818404 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8sr68\" (UniqueName: \"kubernetes.io/projected/fc01020a-ebfd-4c4b-b211-d7da1f9aa357-kube-api-access-8sr68\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.824294 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"openstack-cell1-galera-0\" (UID: \"fc01020a-ebfd-4c4b-b211-d7da1f9aa357\") " pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:13 crc kubenswrapper[4949]: I1001 15:57:13.911560 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/openstack-cell1-galera-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.117323 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/memcached-0"] Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.121568 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.125145 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"memcached-config-data" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.125408 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"memcached-memcached-dockercfg-sv64p" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.125563 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-memcached-svc" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.134060 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.193989 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecccaacc-6b05-4bbb-bba1-523c5b3de332-kolla-config\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.194044 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecccaacc-6b05-4bbb-bba1-523c5b3de332-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.194088 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h2px\" (UniqueName: \"kubernetes.io/projected/ecccaacc-6b05-4bbb-bba1-523c5b3de332-kube-api-access-8h2px\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.194136 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecccaacc-6b05-4bbb-bba1-523c5b3de332-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.194177 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecccaacc-6b05-4bbb-bba1-523c5b3de332-config-data\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.295837 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecccaacc-6b05-4bbb-bba1-523c5b3de332-kolla-config\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.295879 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecccaacc-6b05-4bbb-bba1-523c5b3de332-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.295920 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h2px\" (UniqueName: \"kubernetes.io/projected/ecccaacc-6b05-4bbb-bba1-523c5b3de332-kube-api-access-8h2px\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.295943 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecccaacc-6b05-4bbb-bba1-523c5b3de332-combined-ca-bundle\") pod \"memcached-0\" (UID: 
\"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.295972 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecccaacc-6b05-4bbb-bba1-523c5b3de332-config-data\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.297252 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/ecccaacc-6b05-4bbb-bba1-523c5b3de332-config-data\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.298001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/ecccaacc-6b05-4bbb-bba1-523c5b3de332-kolla-config\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.302241 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memcached-tls-certs\" (UniqueName: \"kubernetes.io/secret/ecccaacc-6b05-4bbb-bba1-523c5b3de332-memcached-tls-certs\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.305347 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ecccaacc-6b05-4bbb-bba1-523c5b3de332-combined-ca-bundle\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.319649 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h2px\" (UniqueName: 
\"kubernetes.io/projected/ecccaacc-6b05-4bbb-bba1-523c5b3de332-kube-api-access-8h2px\") pod \"memcached-0\" (UID: \"ecccaacc-6b05-4bbb-bba1-523c5b3de332\") " pod="openstack/memcached-0" Oct 01 15:57:14 crc kubenswrapper[4949]: I1001 15:57:14.438101 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/memcached-0" Oct 01 15:57:15 crc kubenswrapper[4949]: I1001 15:57:15.909828 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 01 15:57:15 crc kubenswrapper[4949]: I1001 15:57:15.910789 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 01 15:57:15 crc kubenswrapper[4949]: I1001 15:57:15.912762 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"telemetry-ceilometer-dockercfg-d4kc2" Oct 01 15:57:15 crc kubenswrapper[4949]: I1001 15:57:15.932567 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 01 15:57:16 crc kubenswrapper[4949]: I1001 15:57:16.021796 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hf7s9\" (UniqueName: \"kubernetes.io/projected/a8530d62-62b2-46a8-be1c-7061ce71f1c2-kube-api-access-hf7s9\") pod \"kube-state-metrics-0\" (UID: \"a8530d62-62b2-46a8-be1c-7061ce71f1c2\") " pod="openstack/kube-state-metrics-0" Oct 01 15:57:16 crc kubenswrapper[4949]: I1001 15:57:16.123866 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hf7s9\" (UniqueName: \"kubernetes.io/projected/a8530d62-62b2-46a8-be1c-7061ce71f1c2-kube-api-access-hf7s9\") pod \"kube-state-metrics-0\" (UID: \"a8530d62-62b2-46a8-be1c-7061ce71f1c2\") " pod="openstack/kube-state-metrics-0" Oct 01 15:57:16 crc kubenswrapper[4949]: I1001 15:57:16.141558 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hf7s9\" (UniqueName: 
\"kubernetes.io/projected/a8530d62-62b2-46a8-be1c-7061ce71f1c2-kube-api-access-hf7s9\") pod \"kube-state-metrics-0\" (UID: \"a8530d62-62b2-46a8-be1c-7061ce71f1c2\") " pod="openstack/kube-state-metrics-0" Oct 01 15:57:16 crc kubenswrapper[4949]: I1001 15:57:16.265667 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 01 15:57:18 crc kubenswrapper[4949]: I1001 15:57:18.038935 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:57:18 crc kubenswrapper[4949]: I1001 15:57:18.039319 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:57:18 crc kubenswrapper[4949]: I1001 15:57:18.039383 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 15:57:18 crc kubenswrapper[4949]: I1001 15:57:18.040463 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"4d3a1844e3e942fdd712d174f5f7801debfe4a5d8b96ab72892f6da901567689"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 15:57:18 crc kubenswrapper[4949]: I1001 15:57:18.040536 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" 
podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://4d3a1844e3e942fdd712d174f5f7801debfe4a5d8b96ab72892f6da901567689" gracePeriod=600 Oct 01 15:57:18 crc kubenswrapper[4949]: I1001 15:57:18.350769 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="4d3a1844e3e942fdd712d174f5f7801debfe4a5d8b96ab72892f6da901567689" exitCode=0 Oct 01 15:57:18 crc kubenswrapper[4949]: I1001 15:57:18.350810 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"4d3a1844e3e942fdd712d174f5f7801debfe4a5d8b96ab72892f6da901567689"} Oct 01 15:57:18 crc kubenswrapper[4949]: I1001 15:57:18.350842 4949 scope.go:117] "RemoveContainer" containerID="8b48ddbdf5b95765cf3f08bfbf80fa29211dfe735cc809fa7f3ec31b955af407" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.535262 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.536853 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.541585 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-sb-ovndbs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.541585 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-config" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.542333 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-sb-dockercfg-h9jvl" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.542525 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-sb-scripts" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.542663 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovn-metrics" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.559053 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.629512 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.629586 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m4gq\" (UniqueName: \"kubernetes.io/projected/394391aa-ba00-4197-870b-33f881a1afda-kube-api-access-7m4gq\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.629620 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/394391aa-ba00-4197-870b-33f881a1afda-config\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.629644 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/394391aa-ba00-4197-870b-33f881a1afda-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.629665 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/394391aa-ba00-4197-870b-33f881a1afda-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.629687 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/394391aa-ba00-4197-870b-33f881a1afda-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.629745 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394391aa-ba00-4197-870b-33f881a1afda-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.629758 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/394391aa-ba00-4197-870b-33f881a1afda-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.696522 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4kkzs"] Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.697790 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.701978 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-ovs-248dp"] Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.702233 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovncontroller-ovndbs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.703579 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncontroller-ovncontroller-dockercfg-lww2x" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.703605 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.706047 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-scripts" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.723760 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-248dp"] Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.730777 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394391aa-ba00-4197-870b-33f881a1afda-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.730822 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/394391aa-ba00-4197-870b-33f881a1afda-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.730852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.730891 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m4gq\" (UniqueName: \"kubernetes.io/projected/394391aa-ba00-4197-870b-33f881a1afda-kube-api-access-7m4gq\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.730918 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/394391aa-ba00-4197-870b-33f881a1afda-config\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.730067 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4kkzs"] Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.730942 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/394391aa-ba00-4197-870b-33f881a1afda-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.731045 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/394391aa-ba00-4197-870b-33f881a1afda-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.731085 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/394391aa-ba00-4197-870b-33f881a1afda-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.731866 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") device mount path \"/mnt/openstack/pv06\"" pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.735202 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/394391aa-ba00-4197-870b-33f881a1afda-config\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.736499 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/394391aa-ba00-4197-870b-33f881a1afda-scripts\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.737232 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/394391aa-ba00-4197-870b-33f881a1afda-combined-ca-bundle\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.738670 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/394391aa-ba00-4197-870b-33f881a1afda-metrics-certs-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.745141 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/394391aa-ba00-4197-870b-33f881a1afda-ovsdb-rundir\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.748023 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb-tls-certs\" (UniqueName: \"kubernetes.io/secret/394391aa-ba00-4197-870b-33f881a1afda-ovsdbserver-sb-tls-certs\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" 
Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.756926 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m4gq\" (UniqueName: \"kubernetes.io/projected/394391aa-ba00-4197-870b-33f881a1afda-kube-api-access-7m4gq\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.789436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"ovsdbserver-sb-0\" (UID: \"394391aa-ba00-4197-870b-33f881a1afda\") " pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832678 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-var-log-ovn\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832725 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-var-run\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832761 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-var-run\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832787 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-combined-ca-bundle\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832838 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-var-lib\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832854 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-scripts\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832893 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-ovn-controller-tls-certs\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832931 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-etc-ovs\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832954 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/configmap/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-scripts\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.832969 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbgxc\" (UniqueName: \"kubernetes.io/projected/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-kube-api-access-kbgxc\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.833012 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlsb6\" (UniqueName: \"kubernetes.io/projected/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-kube-api-access-tlsb6\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.833033 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-var-run-ovn\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.833048 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-var-log\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.856773 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934573 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-var-lib\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-scripts\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934656 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-ovn-controller-tls-certs\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-etc-ovs\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934707 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-scripts\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934727 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbgxc\" (UniqueName: \"kubernetes.io/projected/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-kube-api-access-kbgxc\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934755 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlsb6\" (UniqueName: \"kubernetes.io/projected/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-kube-api-access-tlsb6\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934780 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-var-run-ovn\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934795 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-var-log\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934821 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-var-log-ovn\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934842 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: 
\"kubernetes.io/host-path/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-var-run\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934857 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-var-run\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.934881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-combined-ca-bundle\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.935269 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-var-lib\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.935433 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ovs\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-etc-ovs\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.935563 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-var-log-ovn\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " 
pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.935720 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-var-run-ovn\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.935866 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-var-run\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.935878 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-var-run\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.935981 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-var-log\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.937013 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-scripts\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.938437 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-scripts\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.940902 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-combined-ca-bundle\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.941190 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-controller-tls-certs\" (UniqueName: \"kubernetes.io/secret/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-ovn-controller-tls-certs\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.955770 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbgxc\" (UniqueName: \"kubernetes.io/projected/9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c-kube-api-access-kbgxc\") pod \"ovn-controller-ovs-248dp\" (UID: \"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c\") " pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:20 crc kubenswrapper[4949]: I1001 15:57:20.956778 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlsb6\" (UniqueName: \"kubernetes.io/projected/230fbfcd-f990-42cf-88bb-9e4c4ae45a7d-kube-api-access-tlsb6\") pod \"ovn-controller-4kkzs\" (UID: \"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d\") " pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:21 crc kubenswrapper[4949]: I1001 15:57:21.015612 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:21 crc kubenswrapper[4949]: I1001 15:57:21.023886 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.503473 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.505266 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.507439 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovncluster-ovndbcluster-nb-dockercfg-bxvhs" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.507891 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovndbcluster-nb-ovndbs" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.508308 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-config" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.509714 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovndbcluster-nb-scripts" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.512102 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.659459 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/321b87e2-0290-4062-9ae3-a7370005b2e4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.659560 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-5pcbb\" (UniqueName: \"kubernetes.io/projected/321b87e2-0290-4062-9ae3-a7370005b2e4-kube-api-access-5pcbb\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.659596 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.659636 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321b87e2-0290-4062-9ae3-a7370005b2e4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.659660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/321b87e2-0290-4062-9ae3-a7370005b2e4-config\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.659701 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/321b87e2-0290-4062-9ae3-a7370005b2e4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.659723 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/321b87e2-0290-4062-9ae3-a7370005b2e4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.659775 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/321b87e2-0290-4062-9ae3-a7370005b2e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.760770 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5pcbb\" (UniqueName: \"kubernetes.io/projected/321b87e2-0290-4062-9ae3-a7370005b2e4-kube-api-access-5pcbb\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.760815 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.760842 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321b87e2-0290-4062-9ae3-a7370005b2e4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.760862 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/321b87e2-0290-4062-9ae3-a7370005b2e4-config\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " 
pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.760906 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/321b87e2-0290-4062-9ae3-a7370005b2e4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.760924 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/321b87e2-0290-4062-9ae3-a7370005b2e4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.760945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/321b87e2-0290-4062-9ae3-a7370005b2e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.760994 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/321b87e2-0290-4062-9ae3-a7370005b2e4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.761419 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") device mount path \"/mnt/openstack/pv03\"" pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.762209 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"ovsdb-rundir\" (UniqueName: \"kubernetes.io/empty-dir/321b87e2-0290-4062-9ae3-a7370005b2e4-ovsdb-rundir\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.762413 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/321b87e2-0290-4062-9ae3-a7370005b2e4-config\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.763057 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/321b87e2-0290-4062-9ae3-a7370005b2e4-scripts\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.765623 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/321b87e2-0290-4062-9ae3-a7370005b2e4-combined-ca-bundle\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.775286 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb-tls-certs\" (UniqueName: \"kubernetes.io/secret/321b87e2-0290-4062-9ae3-a7370005b2e4-ovsdbserver-nb-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.776499 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5pcbb\" (UniqueName: \"kubernetes.io/projected/321b87e2-0290-4062-9ae3-a7370005b2e4-kube-api-access-5pcbb\") pod \"ovsdbserver-nb-0\" (UID: 
\"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.778086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/321b87e2-0290-4062-9ae3-a7370005b2e4-metrics-certs-tls-certs\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.817470 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ovsdbserver-nb-0\" (UID: \"321b87e2-0290-4062-9ae3-a7370005b2e4\") " pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:22 crc kubenswrapper[4949]: I1001 15:57:22.842115 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:43 crc kubenswrapper[4949]: I1001 15:57:43.709069 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-galera-0"] Oct 01 15:57:43 crc kubenswrapper[4949]: I1001 15:57:43.716706 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 01 15:57:43 crc kubenswrapper[4949]: W1001 15:57:43.722878 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5cdf223_529e_4d39_bfc1_7483fbd94a69.slice/crio-96419946e70f0695f7266ab3ccb288c834ea01cdc276c5444188d0f3cf0710a7 WatchSource:0}: Error finding container 96419946e70f0695f7266ab3ccb288c834ea01cdc276c5444188d0f3cf0710a7: Status 404 returned error can't find the container with id 96419946e70f0695f7266ab3ccb288c834ea01cdc276c5444188d0f3cf0710a7 Oct 01 15:57:43 crc kubenswrapper[4949]: I1001 15:57:43.729845 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 01 15:57:43 crc 
kubenswrapper[4949]: I1001 15:57:43.889247 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/memcached-0"] Oct 01 15:57:43 crc kubenswrapper[4949]: I1001 15:57:43.895925 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 01 15:57:43 crc kubenswrapper[4949]: I1001 15:57:43.899427 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4kkzs"] Oct 01 15:57:43 crc kubenswrapper[4949]: I1001 15:57:43.907483 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstack-cell1-galera-0"] Oct 01 15:57:43 crc kubenswrapper[4949]: I1001 15:57:43.994608 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-nb-0"] Oct 01 15:57:44 crc kubenswrapper[4949]: W1001 15:57:44.049519 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod321b87e2_0290_4062_9ae3_a7370005b2e4.slice/crio-3c6612b158dbb586d583998af1d162cd4b8b5bc55cb9cddd86b229e957c05f77 WatchSource:0}: Error finding container 3c6612b158dbb586d583998af1d162cd4b8b5bc55cb9cddd86b229e957c05f77: Status 404 returned error can't find the container with id 3c6612b158dbb586d583998af1d162cd4b8b5bc55cb9cddd86b229e957c05f77 Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.122656 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovsdbserver-sb-0"] Oct 01 15:57:44 crc kubenswrapper[4949]: W1001 15:57:44.125966 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod394391aa_ba00_4197_870b_33f881a1afda.slice/crio-5d4ccaaa5908ab5e07f5b43c17f302a2a92fbc0e7ef8748e7a96117a0f9e784a WatchSource:0}: Error finding container 5d4ccaaa5908ab5e07f5b43c17f302a2a92fbc0e7ef8748e7a96117a0f9e784a: Status 404 returned error can't find the container with id 5d4ccaaa5908ab5e07f5b43c17f302a2a92fbc0e7ef8748e7a96117a0f9e784a Oct 01 
15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.215229 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-ovs-248dp"] Oct 01 15:57:44 crc kubenswrapper[4949]: W1001 15:57:44.218431 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9db16e40_d8f9_4ee2_bcfc_b093ecdacc7c.slice/crio-7ec1e848c7ed54b72fcb508d208003a27234b6a8e75aaa716062db07fba1d236 WatchSource:0}: Error finding container 7ec1e848c7ed54b72fcb508d208003a27234b6a8e75aaa716062db07fba1d236: Status 404 returned error can't find the container with id 7ec1e848c7ed54b72fcb508d208003a27234b6a8e75aaa716062db07fba1d236 Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.613748 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ecccaacc-6b05-4bbb-bba1-523c5b3de332","Type":"ContainerStarted","Data":"e2918ed328e7090710c08349864d89594a8ad9170d14875360ea5847c1c43e0b"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.615232 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"321b87e2-0290-4062-9ae3-a7370005b2e4","Type":"ContainerStarted","Data":"3c6612b158dbb586d583998af1d162cd4b8b5bc55cb9cddd86b229e957c05f77"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.616809 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2","Type":"ContainerStarted","Data":"6ca829030abdba0e6e2294139faa3220f1c3ec37d7dab9e2c8d904b469b527e7"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.621738 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"7dd8953e4b2c2c8892c46fcb9ba2ef1fa5099f63e2374f27e45351f899f750d3"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.623742 
4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5cdf223-529e-4d39-bfc1-7483fbd94a69","Type":"ContainerStarted","Data":"96419946e70f0695f7266ab3ccb288c834ea01cdc276c5444188d0f3cf0710a7"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.625432 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-248dp" event={"ID":"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c","Type":"ContainerStarted","Data":"7ec1e848c7ed54b72fcb508d208003a27234b6a8e75aaa716062db07fba1d236"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.627305 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"394391aa-ba00-4197-870b-33f881a1afda","Type":"ContainerStarted","Data":"5d4ccaaa5908ab5e07f5b43c17f302a2a92fbc0e7ef8748e7a96117a0f9e784a"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.629114 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fc01020a-ebfd-4c4b-b211-d7da1f9aa357","Type":"ContainerStarted","Data":"900d1bb92a14522414bf62ac3d1791d474cda985fc20b345ef03b5dee6849459"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.631607 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a","Type":"ContainerStarted","Data":"a542a40fed197c0dd4a77c617b207e79bbc00650bb3e87148e9e2f8a3d0a8bed"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.633615 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8530d62-62b2-46a8-be1c-7061ce71f1c2","Type":"ContainerStarted","Data":"bafeda7a5783bda51d7bfb433b86f713b0f6beafe670e7dfb5118258a0eb0eb3"} Oct 01 15:57:44 crc kubenswrapper[4949]: I1001 15:57:44.635363 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4kkzs" 
event={"ID":"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d","Type":"ContainerStarted","Data":"613a47fb3a36012e2809f5147bba030627d58f6d1bffa264c9387d0950e83712"} Oct 01 15:57:44 crc kubenswrapper[4949]: E1001 15:57:44.946755 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 01 15:57:44 crc kubenswrapper[4949]: E1001 15:57:44.946947 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n68chd6h679hbfh55fhc6h5ffh5d8h94h56ch589hb4hc5h57bh677hcdh655h8dh667h675h654h66ch567h8fh659h5b4h675h566h55bh54h67dh6dq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-djbzn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecyc
le:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-666b6646f7-p9pv5_openstack(762283e6-fbe2-4174-92e3-ffc1dc5e76f7): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 01 15:57:44 crc kubenswrapper[4949]: E1001 15:57:44.948109 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" podUID="762283e6-fbe2-4174-92e3-ffc1dc5e76f7" Oct 01 15:57:45 crc kubenswrapper[4949]: E1001 15:57:45.134615 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 01 15:57:45 crc kubenswrapper[4949]: E1001 15:57:45.134788 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:ndfhb5h667h568h584h5f9h58dh565h664h587h597h577h64bh5c4h66fh647hbdh68ch5c5h68dh686h5f7h64hd7hc6h55fh57bh98h57fh87h5fh57fq,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wkmww,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-78dd6ddcc-6s8xp_openstack(f9c1d21c-695d-40b8-959a-36cd897d56f6): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 01 15:57:45 crc kubenswrapper[4949]: E1001 15:57:45.136146 4949 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" podUID="f9c1d21c-695d-40b8-959a-36cd897d56f6" Oct 01 15:57:45 crc kubenswrapper[4949]: E1001 15:57:45.167079 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 01 15:57:45 crc kubenswrapper[4949]: E1001 15:57:45.167274 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries 
--test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:n659h4h664hbh658h587h67ch89h587h8fh679hc6hf9h55fh644h5d5h698h68dh5cdh5ffh669h54ch9h689hb8hd4h5bfhd8h5d7h5fh665h574q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:dns-svc,ReadOnly:true,MountPath:/etc/dnsmasq.d/hosts/dns-svc,SubPath:dns-svc,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qkk9b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-57d769cc4f-znlf7_openstack(c27ad98d-1bd3-4d18-b0c5-882f5e080459): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 01 15:57:45 crc kubenswrapper[4949]: E1001 15:57:45.169777 4949 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" podUID="c27ad98d-1bd3-4d18-b0c5-882f5e080459" Oct 01 15:57:45 crc kubenswrapper[4949]: E1001 15:57:45.645356 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" podUID="762283e6-fbe2-4174-92e3-ffc1dc5e76f7" Oct 01 15:57:45 crc kubenswrapper[4949]: E1001 15:57:45.645734 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified\\\"\"" pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" podUID="c27ad98d-1bd3-4d18-b0c5-882f5e080459" Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.122524 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" Oct 01 15:57:46 crc kubenswrapper[4949]: E1001 15:57:46.258428 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified" Oct 01 15:57:46 crc kubenswrapper[4949]: E1001 15:57:46.258589 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:init,Image:quay.io/podified-antelope-centos9/openstack-neutron-server:current-podified,Command:[/bin/bash],Args:[-c dnsmasq --interface=* --conf-dir=/etc/dnsmasq.d --hostsdir=/etc/dnsmasq.d/hosts --keep-in-foreground --log-debug --bind-interfaces --listen-address=$(POD_IP) --port 5353 --log-facility=- --no-hosts --domain-needed --no-resolv --bogus-priv --log-queries --test],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CONFIG_HASH,Value:nffh5bdhf4h5f8h79h55h77h58fh56dh7bh6fh578hbch55dh68h56bhd9h65dh57ch658hc9h566h666h688h58h65dh684h5d7h6ch575h5d6h88q,ValueFrom:nil,},EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/etc/dnsmasq.d/config.cfg,SubPath:dns,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-6b5vl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000650000,RunAsNonRoot:*true,Rea
dOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod dnsmasq-dns-675f4bcbfc-zh6mt_openstack(e3c4f0da-e41c-4091-9e44-989b59617c83): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 01 15:57:46 crc kubenswrapper[4949]: E1001 15:57:46.259754 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"init\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" podUID="e3c4f0da-e41c-4091-9e44-989b59617c83" Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.280716 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-config\") pod \"f9c1d21c-695d-40b8-959a-36cd897d56f6\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.280936 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkmww\" (UniqueName: \"kubernetes.io/projected/f9c1d21c-695d-40b8-959a-36cd897d56f6-kube-api-access-wkmww\") pod \"f9c1d21c-695d-40b8-959a-36cd897d56f6\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.281012 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-dns-svc\") pod \"f9c1d21c-695d-40b8-959a-36cd897d56f6\" (UID: \"f9c1d21c-695d-40b8-959a-36cd897d56f6\") " 
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.281294 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-config" (OuterVolumeSpecName: "config") pod "f9c1d21c-695d-40b8-959a-36cd897d56f6" (UID: "f9c1d21c-695d-40b8-959a-36cd897d56f6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.281662 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "f9c1d21c-695d-40b8-959a-36cd897d56f6" (UID: "f9c1d21c-695d-40b8-959a-36cd897d56f6"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.288623 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c1d21c-695d-40b8-959a-36cd897d56f6-kube-api-access-wkmww" (OuterVolumeSpecName: "kube-api-access-wkmww") pod "f9c1d21c-695d-40b8-959a-36cd897d56f6" (UID: "f9c1d21c-695d-40b8-959a-36cd897d56f6"). InnerVolumeSpecName "kube-api-access-wkmww". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.382412 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.382450 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f9c1d21c-695d-40b8-959a-36cd897d56f6-config\") on node \"crc\" DevicePath \"\""
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.382465 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkmww\" (UniqueName: \"kubernetes.io/projected/f9c1d21c-695d-40b8-959a-36cd897d56f6-kube-api-access-wkmww\") on node \"crc\" DevicePath \"\""
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.652967 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp"
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.653133 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-78dd6ddcc-6s8xp" event={"ID":"f9c1d21c-695d-40b8-959a-36cd897d56f6","Type":"ContainerDied","Data":"2ff52cf8620a045ee0e029bf1ac4cb62c0fba699861e2d77a9d33625fdb07591"}
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.758487 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6s8xp"]
Oct 01 15:57:46 crc kubenswrapper[4949]: I1001 15:57:46.768078 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-78dd6ddcc-6s8xp"]
Oct 01 15:57:47 crc kubenswrapper[4949]: I1001 15:57:47.612074 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f9c1d21c-695d-40b8-959a-36cd897d56f6" path="/var/lib/kubelet/pods/f9c1d21c-695d-40b8-959a-36cd897d56f6/volumes"
Oct 01 15:57:49 crc kubenswrapper[4949]: I1001 15:57:49.627549 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt"
Oct 01 15:57:49 crc kubenswrapper[4949]: I1001 15:57:49.673148 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt" event={"ID":"e3c4f0da-e41c-4091-9e44-989b59617c83","Type":"ContainerDied","Data":"d5eeec89310a54604b8a72088528aec994d2430cda3c43198185933aba3b8f0b"}
Oct 01 15:57:49 crc kubenswrapper[4949]: I1001 15:57:49.673208 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-675f4bcbfc-zh6mt"
Oct 01 15:57:49 crc kubenswrapper[4949]: I1001 15:57:49.674917 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6b5vl\" (UniqueName: \"kubernetes.io/projected/e3c4f0da-e41c-4091-9e44-989b59617c83-kube-api-access-6b5vl\") pod \"e3c4f0da-e41c-4091-9e44-989b59617c83\" (UID: \"e3c4f0da-e41c-4091-9e44-989b59617c83\") "
Oct 01 15:57:49 crc kubenswrapper[4949]: I1001 15:57:49.674966 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c4f0da-e41c-4091-9e44-989b59617c83-config\") pod \"e3c4f0da-e41c-4091-9e44-989b59617c83\" (UID: \"e3c4f0da-e41c-4091-9e44-989b59617c83\") "
Oct 01 15:57:49 crc kubenswrapper[4949]: I1001 15:57:49.675623 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3c4f0da-e41c-4091-9e44-989b59617c83-config" (OuterVolumeSpecName: "config") pod "e3c4f0da-e41c-4091-9e44-989b59617c83" (UID: "e3c4f0da-e41c-4091-9e44-989b59617c83"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 15:57:49 crc kubenswrapper[4949]: I1001 15:57:49.681847 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3c4f0da-e41c-4091-9e44-989b59617c83-kube-api-access-6b5vl" (OuterVolumeSpecName: "kube-api-access-6b5vl") pod "e3c4f0da-e41c-4091-9e44-989b59617c83" (UID: "e3c4f0da-e41c-4091-9e44-989b59617c83"). InnerVolumeSpecName "kube-api-access-6b5vl". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 15:57:49 crc kubenswrapper[4949]: I1001 15:57:49.776387 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6b5vl\" (UniqueName: \"kubernetes.io/projected/e3c4f0da-e41c-4091-9e44-989b59617c83-kube-api-access-6b5vl\") on node \"crc\" DevicePath \"\""
Oct 01 15:57:49 crc kubenswrapper[4949]: I1001 15:57:49.776632 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3c4f0da-e41c-4091-9e44-989b59617c83-config\") on node \"crc\" DevicePath \"\""
Oct 01 15:57:50 crc kubenswrapper[4949]: I1001 15:57:50.032424 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zh6mt"]
Oct 01 15:57:50 crc kubenswrapper[4949]: I1001 15:57:50.032696 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-675f4bcbfc-zh6mt"]
Oct 01 15:57:51 crc kubenswrapper[4949]: I1001 15:57:51.611487 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3c4f0da-e41c-4091-9e44-989b59617c83" path="/var/lib/kubelet/pods/e3c4f0da-e41c-4091-9e44-989b59617c83/volumes"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.783208 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-metrics-dcnsm"]
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.784302 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.792308 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-metrics-config"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.803223 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dcnsm"]
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.819694 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b840653-c566-4109-afbe-c5733092d91d-config\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.819789 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6b840653-c566-4109-afbe-c5733092d91d-ovs-rundir\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.819817 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6b840653-c566-4109-afbe-c5733092d91d-ovn-rundir\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.819951 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b840653-c566-4109-afbe-c5733092d91d-combined-ca-bundle\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.819983 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4grx4\" (UniqueName: \"kubernetes.io/projected/6b840653-c566-4109-afbe-c5733092d91d-kube-api-access-4grx4\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.820004 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b840653-c566-4109-afbe-c5733092d91d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.921785 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b840653-c566-4109-afbe-c5733092d91d-config\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.921886 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6b840653-c566-4109-afbe-c5733092d91d-ovs-rundir\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.921913 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6b840653-c566-4109-afbe-c5733092d91d-ovn-rundir\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.921999 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b840653-c566-4109-afbe-c5733092d91d-combined-ca-bundle\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.922025 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4grx4\" (UniqueName: \"kubernetes.io/projected/6b840653-c566-4109-afbe-c5733092d91d-kube-api-access-4grx4\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.922046 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b840653-c566-4109-afbe-c5733092d91d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.923260 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-rundir\" (UniqueName: \"kubernetes.io/host-path/6b840653-c566-4109-afbe-c5733092d91d-ovs-rundir\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.923474 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6b840653-c566-4109-afbe-c5733092d91d-config\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.923904 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/host-path/6b840653-c566-4109-afbe-c5733092d91d-ovn-rundir\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.932420 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/6b840653-c566-4109-afbe-c5733092d91d-metrics-certs-tls-certs\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.940786 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b840653-c566-4109-afbe-c5733092d91d-combined-ca-bundle\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:52 crc kubenswrapper[4949]: I1001 15:57:52.964352 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znlf7"]
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.075610 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4grx4\" (UniqueName: \"kubernetes.io/projected/6b840653-c566-4109-afbe-c5733092d91d-kube-api-access-4grx4\") pod \"ovn-controller-metrics-dcnsm\" (UID: \"6b840653-c566-4109-afbe-c5733092d91d\") " pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.076046 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-865wx"]
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.087894 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.105696 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-sb"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.136850 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-metrics-dcnsm"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.144484 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.144524 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-config\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.144577 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4swdd\" (UniqueName: \"kubernetes.io/projected/c2d58424-ec27-4ff6-98e0-241e5629484d-kube-api-access-4swdd\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.144706 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.164839 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-865wx"]
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.246230 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.246295 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-config\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.246357 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4swdd\" (UniqueName: \"kubernetes.io/projected/c2d58424-ec27-4ff6-98e0-241e5629484d-kube-api-access-4swdd\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.246452 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.247656 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-dns-svc\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.248093 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-config\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.248113 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-ovsdbserver-sb\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.272027 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p9pv5"]
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.294406 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4swdd\" (UniqueName: \"kubernetes.io/projected/c2d58424-ec27-4ff6-98e0-241e5629484d-kube-api-access-4swdd\") pod \"dnsmasq-dns-7f896c8c65-865wx\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.337147 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tt8cw"]
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.339621 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.355138 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovsdbserver-nb"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.368670 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tt8cw"]
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.458157 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.458619 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.458735 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-config\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.458878 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.458947 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5rh2b\" (UniqueName: \"kubernetes.io/projected/3919ba46-ce04-4089-a5d1-033501df8eaf-kube-api-access-5rh2b\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.521037 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-865wx"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.561503 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.561551 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5rh2b\" (UniqueName: \"kubernetes.io/projected/3919ba46-ce04-4089-a5d1-033501df8eaf-kube-api-access-5rh2b\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.561614 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.561632 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.561668 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-config\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.563631 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-sb\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.564524 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-nb\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.565196 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-dns-svc\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.568004 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-config\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.589052 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5rh2b\" (UniqueName: \"kubernetes.io/projected/3919ba46-ce04-4089-a5d1-033501df8eaf-kube-api-access-5rh2b\") pod \"dnsmasq-dns-86db49b7ff-tt8cw\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.713104 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"321b87e2-0290-4062-9ae3-a7370005b2e4","Type":"ContainerStarted","Data":"f7d1e99ab75bceb0fa6cbc877a6cf0dea9ccbb8674c7a16a72b17ca3f834d785"}
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.720746 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" event={"ID":"762283e6-fbe2-4174-92e3-ffc1dc5e76f7","Type":"ContainerDied","Data":"ab05bd7f8f1755aff14027c9f6be98de3adfd9b9217036781f59b558e227bef6"}
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.720856 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ab05bd7f8f1755aff14027c9f6be98de3adfd9b9217036781f59b558e227bef6"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.723236 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"394391aa-ba00-4197-870b-33f881a1afda","Type":"ContainerStarted","Data":"68a2166c1de139b3b0366b4683d9d22e497022f4a04c2608f34486d4922c0979"}
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.725097 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/memcached-0" event={"ID":"ecccaacc-6b05-4bbb-bba1-523c5b3de332","Type":"ContainerStarted","Data":"fc61cb6eebc46351fb41a55f3319812d68328a9eaf169a4ee1d7da8e046344c5"}
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.726002 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/memcached-0"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.730163 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" event={"ID":"c27ad98d-1bd3-4d18-b0c5-882f5e080459","Type":"ContainerDied","Data":"154da57f56647d6b3dfef7dffc3bc9c9107f464201693b08ad882f1e38101bc2"}
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.730331 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="154da57f56647d6b3dfef7dffc3bc9c9107f464201693b08ad882f1e38101bc2"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.738529 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fc01020a-ebfd-4c4b-b211-d7da1f9aa357","Type":"ContainerStarted","Data":"56815c924f9a92e211d5e5e6583da650640646383f61815b616f290b1c9da64a"}
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.752363 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/memcached-0" podStartSLOduration=31.74402936 podStartE2EDuration="39.752339511s" podCreationTimestamp="2025-10-01 15:57:14 +0000 UTC" firstStartedPulling="2025-10-01 15:57:43.905747585 +0000 UTC m=+963.211353776" lastFinishedPulling="2025-10-01 15:57:51.914057726 +0000 UTC m=+971.219663927" observedRunningTime="2025-10-01 15:57:53.746263317 +0000 UTC m=+973.051869508" watchObservedRunningTime="2025-10-01 15:57:53.752339511 +0000 UTC m=+973.057945722"
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.894835 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-metrics-dcnsm"]
Oct 01 15:57:53 crc kubenswrapper[4949]: I1001 15:57:53.925421 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw"
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.031340 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-865wx"]
Oct 01 15:57:54 crc kubenswrapper[4949]: W1001 15:57:54.160911 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc2d58424_ec27_4ff6_98e0_241e5629484d.slice/crio-9f4ce6185fb3d85f03dc36c0ed6c717b4a183c1ebfb5134dac328823dea9e71f WatchSource:0}: Error finding container 9f4ce6185fb3d85f03dc36c0ed6c717b4a183c1ebfb5134dac328823dea9e71f: Status 404 returned error can't find the container with id 9f4ce6185fb3d85f03dc36c0ed6c717b4a183c1ebfb5134dac328823dea9e71f
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.216690 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-znlf7"
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.249367 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-p9pv5"
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.276653 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-djbzn\" (UniqueName: \"kubernetes.io/projected/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-kube-api-access-djbzn\") pod \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") "
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.276687 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-config\") pod \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") "
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.276791 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-dns-svc\") pod \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") "
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.276894 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-dns-svc\") pod \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\" (UID: \"762283e6-fbe2-4174-92e3-ffc1dc5e76f7\") "
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.276921 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkk9b\" (UniqueName: \"kubernetes.io/projected/c27ad98d-1bd3-4d18-b0c5-882f5e080459-kube-api-access-qkk9b\") pod \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") "
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.276972 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-config\") pod \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\" (UID: \"c27ad98d-1bd3-4d18-b0c5-882f5e080459\") "
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.277848 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-config" (OuterVolumeSpecName: "config") pod "c27ad98d-1bd3-4d18-b0c5-882f5e080459" (UID: "c27ad98d-1bd3-4d18-b0c5-882f5e080459"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.277931 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c27ad98d-1bd3-4d18-b0c5-882f5e080459" (UID: "c27ad98d-1bd3-4d18-b0c5-882f5e080459"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.278290 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "762283e6-fbe2-4174-92e3-ffc1dc5e76f7" (UID: "762283e6-fbe2-4174-92e3-ffc1dc5e76f7"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.278557 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-config" (OuterVolumeSpecName: "config") pod "762283e6-fbe2-4174-92e3-ffc1dc5e76f7" (UID: "762283e6-fbe2-4174-92e3-ffc1dc5e76f7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.283991 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-kube-api-access-djbzn" (OuterVolumeSpecName: "kube-api-access-djbzn") pod "762283e6-fbe2-4174-92e3-ffc1dc5e76f7" (UID: "762283e6-fbe2-4174-92e3-ffc1dc5e76f7"). InnerVolumeSpecName "kube-api-access-djbzn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.288198 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c27ad98d-1bd3-4d18-b0c5-882f5e080459-kube-api-access-qkk9b" (OuterVolumeSpecName: "kube-api-access-qkk9b") pod "c27ad98d-1bd3-4d18-b0c5-882f5e080459" (UID: "c27ad98d-1bd3-4d18-b0c5-882f5e080459"). InnerVolumeSpecName "kube-api-access-qkk9b". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.294100 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tt8cw"]
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.378971 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.379473 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.379485 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkk9b\" (UniqueName: \"kubernetes.io/projected/c27ad98d-1bd3-4d18-b0c5-882f5e080459-kube-api-access-qkk9b\") on node \"crc\" DevicePath \"\""
Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.379497 
4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c27ad98d-1bd3-4d18-b0c5-882f5e080459-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.379506 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-djbzn\" (UniqueName: \"kubernetes.io/projected/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-kube-api-access-djbzn\") on node \"crc\" DevicePath \"\"" Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.379517 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/762283e6-fbe2-4174-92e3-ffc1dc5e76f7-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.747279 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" event={"ID":"3919ba46-ce04-4089-a5d1-033501df8eaf","Type":"ContainerStarted","Data":"7a078b73bd05c3d994870717e70a0f472065dcc8c75558b7286a1c5b970637ae"} Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.749535 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2","Type":"ContainerStarted","Data":"9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77"} Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.751894 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dcnsm" event={"ID":"6b840653-c566-4109-afbe-c5733092d91d","Type":"ContainerStarted","Data":"db699c79b5cf56dfd13f0527bb2404c168d6de60c729740637def3fc76419e6e"} Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.754164 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8530d62-62b2-46a8-be1c-7061ce71f1c2","Type":"ContainerStarted","Data":"5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69"} Oct 01 15:57:54 crc 
kubenswrapper[4949]: I1001 15:57:54.754348 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.757438 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5cdf223-529e-4d39-bfc1-7483fbd94a69","Type":"ContainerStarted","Data":"76926bb4eea9db9e4abf777d634a994255183b5ec53bb5b54d1d329d186fd7fd"} Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.765961 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4kkzs" event={"ID":"230fbfcd-f990-42cf-88bb-9e4c4ae45a7d","Type":"ContainerStarted","Data":"f218278f4d2ebbd45ac51c02e8a541947530adf105bdcdef23c58a3bfc0e51a9"} Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.769838 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-248dp" event={"ID":"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c","Type":"ContainerStarted","Data":"b4851b33d7f92767cd34628ae0647d57cccff88f969ff30fcdc9fea01e324777"} Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.780044 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-57d769cc4f-znlf7" Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.781278 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" event={"ID":"c2d58424-ec27-4ff6-98e0-241e5629484d","Type":"ContainerStarted","Data":"9f4ce6185fb3d85f03dc36c0ed6c717b4a183c1ebfb5134dac328823dea9e71f"} Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.781550 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-666b6646f7-p9pv5" Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.846439 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=30.905412554 podStartE2EDuration="39.846422879s" podCreationTimestamp="2025-10-01 15:57:15 +0000 UTC" firstStartedPulling="2025-10-01 15:57:43.90127493 +0000 UTC m=+963.206881111" lastFinishedPulling="2025-10-01 15:57:52.842285245 +0000 UTC m=+972.147891436" observedRunningTime="2025-10-01 15:57:54.845135814 +0000 UTC m=+974.150742025" watchObservedRunningTime="2025-10-01 15:57:54.846422879 +0000 UTC m=+974.152029070" Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.909672 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p9pv5"] Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.918819 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-666b6646f7-p9pv5"] Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.930051 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znlf7"] Oct 01 15:57:54 crc kubenswrapper[4949]: I1001 15:57:54.935916 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-57d769cc4f-znlf7"] Oct 01 15:57:55 crc kubenswrapper[4949]: I1001 15:57:55.611483 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="762283e6-fbe2-4174-92e3-ffc1dc5e76f7" path="/var/lib/kubelet/pods/762283e6-fbe2-4174-92e3-ffc1dc5e76f7/volumes" Oct 01 15:57:55 crc kubenswrapper[4949]: I1001 15:57:55.612200 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c27ad98d-1bd3-4d18-b0c5-882f5e080459" path="/var/lib/kubelet/pods/c27ad98d-1bd3-4d18-b0c5-882f5e080459/volumes" Oct 01 15:57:55 crc kubenswrapper[4949]: I1001 15:57:55.791237 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c" containerID="b4851b33d7f92767cd34628ae0647d57cccff88f969ff30fcdc9fea01e324777" exitCode=0 Oct 01 15:57:55 crc kubenswrapper[4949]: I1001 15:57:55.791301 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-248dp" event={"ID":"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c","Type":"ContainerDied","Data":"b4851b33d7f92767cd34628ae0647d57cccff88f969ff30fcdc9fea01e324777"} Oct 01 15:57:55 crc kubenswrapper[4949]: I1001 15:57:55.794928 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a","Type":"ContainerStarted","Data":"37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc"} Oct 01 15:57:55 crc kubenswrapper[4949]: I1001 15:57:55.795359 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-4kkzs" Oct 01 15:57:55 crc kubenswrapper[4949]: I1001 15:57:55.880053 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4kkzs" podStartSLOduration=27.518526125 podStartE2EDuration="35.880023416s" podCreationTimestamp="2025-10-01 15:57:20 +0000 UTC" firstStartedPulling="2025-10-01 15:57:43.903158623 +0000 UTC m=+963.208764814" lastFinishedPulling="2025-10-01 15:57:52.264655914 +0000 UTC m=+971.570262105" observedRunningTime="2025-10-01 15:57:55.870666173 +0000 UTC m=+975.176272364" watchObservedRunningTime="2025-10-01 15:57:55.880023416 +0000 UTC m=+975.185629607" Oct 01 15:57:57 crc kubenswrapper[4949]: I1001 15:57:57.808745 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-248dp" event={"ID":"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c","Type":"ContainerStarted","Data":"0672424c7c490f1ac85aabbac993ee93855048d3708ce0225b4461d41c298328"} Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.818516 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="fc01020a-ebfd-4c4b-b211-d7da1f9aa357" containerID="56815c924f9a92e211d5e5e6583da650640646383f61815b616f290b1c9da64a" exitCode=0 Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.818591 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fc01020a-ebfd-4c4b-b211-d7da1f9aa357","Type":"ContainerDied","Data":"56815c924f9a92e211d5e5e6583da650640646383f61815b616f290b1c9da64a"} Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.827343 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-nb-0" event={"ID":"321b87e2-0290-4062-9ae3-a7370005b2e4","Type":"ContainerStarted","Data":"1d2ad167fe61593cfe8db04f7b89123a13a72d1aed162428dceb4a7f0afb0420"} Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.835308 4949 generic.go:334] "Generic (PLEG): container finished" podID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerID="f27741557178a0cb704261316b2b5ba1ae7d5c1d0116e8f74635aab93a9ed5cd" exitCode=0 Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.835374 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" event={"ID":"3919ba46-ce04-4089-a5d1-033501df8eaf","Type":"ContainerDied","Data":"f27741557178a0cb704261316b2b5ba1ae7d5c1d0116e8f74635aab93a9ed5cd"} Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.840323 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-metrics-dcnsm" event={"ID":"6b840653-c566-4109-afbe-c5733092d91d","Type":"ContainerStarted","Data":"750ec5724a460bcbfc2ae31c7997cf7fc6da95c4dc2164db9ee0d028cc383985"} Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.842518 4949 generic.go:334] "Generic (PLEG): container finished" podID="d5cdf223-529e-4d39-bfc1-7483fbd94a69" containerID="76926bb4eea9db9e4abf777d634a994255183b5ec53bb5b54d1d329d186fd7fd" exitCode=0 Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.842587 4949 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openstack/openstack-galera-0" event={"ID":"d5cdf223-529e-4d39-bfc1-7483fbd94a69","Type":"ContainerDied","Data":"76926bb4eea9db9e4abf777d634a994255183b5ec53bb5b54d1d329d186fd7fd"} Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.842774 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.850573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-ovs-248dp" event={"ID":"9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c","Type":"ContainerStarted","Data":"d9b1c30e268b0348af1137ad35f736dad8ff9b34427c8cfb1e81855b8055bd16"} Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.850641 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.850742 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-controller-ovs-248dp" Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.856917 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovsdbserver-sb-0" event={"ID":"394391aa-ba00-4197-870b-33f881a1afda","Type":"ContainerStarted","Data":"fedc1ad45dd561009443b5641e64d0e25f0ca34271b685ff4c750292b2ceb979"} Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.861223 4949 generic.go:334] "Generic (PLEG): container finished" podID="c2d58424-ec27-4ff6-98e0-241e5629484d" containerID="65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a" exitCode=0 Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.861288 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" event={"ID":"c2d58424-ec27-4ff6-98e0-241e5629484d","Type":"ContainerDied","Data":"65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a"} Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.918393 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="startup" status="started" pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.926007 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-nb-0" podStartSLOduration=23.766284934 podStartE2EDuration="37.92599077s" podCreationTimestamp="2025-10-01 15:57:21 +0000 UTC" firstStartedPulling="2025-10-01 15:57:44.051550237 +0000 UTC m=+963.357156448" lastFinishedPulling="2025-10-01 15:57:58.211256093 +0000 UTC m=+977.516862284" observedRunningTime="2025-10-01 15:57:58.919348081 +0000 UTC m=+978.224954282" watchObservedRunningTime="2025-10-01 15:57:58.92599077 +0000 UTC m=+978.231596961" Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.954453 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-metrics-dcnsm" podStartSLOduration=2.537447079 podStartE2EDuration="6.954428403s" podCreationTimestamp="2025-10-01 15:57:52 +0000 UTC" firstStartedPulling="2025-10-01 15:57:53.912056944 +0000 UTC m=+973.217663135" lastFinishedPulling="2025-10-01 15:57:58.329038268 +0000 UTC m=+977.634644459" observedRunningTime="2025-10-01 15:57:58.948991995 +0000 UTC m=+978.254598196" watchObservedRunningTime="2025-10-01 15:57:58.954428403 +0000 UTC m=+978.260034594" Oct 01 15:57:58 crc kubenswrapper[4949]: I1001 15:57:58.988144 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovsdbserver-sb-0" podStartSLOduration=25.942058799 podStartE2EDuration="39.988114096s" podCreationTimestamp="2025-10-01 15:57:19 +0000 UTC" firstStartedPulling="2025-10-01 15:57:44.128594813 +0000 UTC m=+963.434201004" lastFinishedPulling="2025-10-01 15:57:58.17465011 +0000 UTC m=+977.480256301" observedRunningTime="2025-10-01 15:57:58.979170193 +0000 UTC m=+978.284776394" watchObservedRunningTime="2025-10-01 15:57:58.988114096 +0000 UTC m=+978.293720277" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.028076 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-ovs-248dp" podStartSLOduration=31.244923371 podStartE2EDuration="39.02805698s" podCreationTimestamp="2025-10-01 15:57:20 +0000 UTC" firstStartedPulling="2025-10-01 15:57:44.221252664 +0000 UTC m=+963.526858855" lastFinishedPulling="2025-10-01 15:57:52.004386273 +0000 UTC m=+971.309992464" observedRunningTime="2025-10-01 15:57:59.021484222 +0000 UTC m=+978.327090433" watchObservedRunningTime="2025-10-01 15:57:59.02805698 +0000 UTC m=+978.333663171" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.440042 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/memcached-0" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.857680 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.871638 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-galera-0" event={"ID":"d5cdf223-529e-4d39-bfc1-7483fbd94a69","Type":"ContainerStarted","Data":"b58c68108028840c1ad2f883aa61021248fcf76673dbbb9f42c2d864d3143ce1"} Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.874141 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" event={"ID":"c2d58424-ec27-4ff6-98e0-241e5629484d","Type":"ContainerStarted","Data":"071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80"} Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.874299 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.876731 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstack-cell1-galera-0" event={"ID":"fc01020a-ebfd-4c4b-b211-d7da1f9aa357","Type":"ContainerStarted","Data":"61068963539d74688769d93a1394c912bcf529530d564d0575be7e4ecdfed9e3"} Oct 
01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.878669 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" event={"ID":"3919ba46-ce04-4089-a5d1-033501df8eaf","Type":"ContainerStarted","Data":"036c124ae8721476d22934a06e2614e69914392f5db1da0d747a732fc1de6874"} Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.878910 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.879158 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.897864 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-galera-0" podStartSLOduration=39.298737003 podStartE2EDuration="47.897843313s" podCreationTimestamp="2025-10-01 15:57:12 +0000 UTC" firstStartedPulling="2025-10-01 15:57:43.73041549 +0000 UTC m=+963.036021681" lastFinishedPulling="2025-10-01 15:57:52.3295218 +0000 UTC m=+971.635127991" observedRunningTime="2025-10-01 15:57:59.894597125 +0000 UTC m=+979.200203316" watchObservedRunningTime="2025-10-01 15:57:59.897843313 +0000 UTC m=+979.203449504" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.903179 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/ovsdbserver-sb-0" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.923173 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-nb-0" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.933444 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstack-cell1-galera-0" podStartSLOduration=39.779897492 podStartE2EDuration="47.933425579s" podCreationTimestamp="2025-10-01 15:57:12 +0000 UTC" firstStartedPulling="2025-10-01 15:57:43.909561771 +0000 UTC m=+963.215167962" 
lastFinishedPulling="2025-10-01 15:57:52.063089858 +0000 UTC m=+971.368696049" observedRunningTime="2025-10-01 15:57:59.91795939 +0000 UTC m=+979.223565631" watchObservedRunningTime="2025-10-01 15:57:59.933425579 +0000 UTC m=+979.239031770" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.934440 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" podStartSLOduration=3.05784949 podStartE2EDuration="6.934432636s" podCreationTimestamp="2025-10-01 15:57:53 +0000 UTC" firstStartedPulling="2025-10-01 15:57:54.296943654 +0000 UTC m=+973.602549845" lastFinishedPulling="2025-10-01 15:57:58.1735268 +0000 UTC m=+977.479132991" observedRunningTime="2025-10-01 15:57:59.932273328 +0000 UTC m=+979.237879539" watchObservedRunningTime="2025-10-01 15:57:59.934432636 +0000 UTC m=+979.240038827" Oct 01 15:57:59 crc kubenswrapper[4949]: I1001 15:57:59.962362 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" podStartSLOduration=2.905926199 podStartE2EDuration="6.962337373s" podCreationTimestamp="2025-10-01 15:57:53 +0000 UTC" firstStartedPulling="2025-10-01 15:57:54.164966884 +0000 UTC m=+973.470573075" lastFinishedPulling="2025-10-01 15:57:58.221378058 +0000 UTC m=+977.526984249" observedRunningTime="2025-10-01 15:57:59.948448326 +0000 UTC m=+979.254054517" watchObservedRunningTime="2025-10-01 15:57:59.962337373 +0000 UTC m=+979.267943564" Oct 01 15:58:00 crc kubenswrapper[4949]: I1001 15:58:00.857852 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovsdbserver-sb-0" Oct 01 15:58:00 crc kubenswrapper[4949]: I1001 15:58:00.898590 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovsdbserver-sb-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.041800 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-northd-0"] Oct 01 15:58:01 crc 
kubenswrapper[4949]: I1001 15:58:01.043252 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.049600 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-config" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.049644 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ovnnorthd-ovndbs" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.049700 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ovnnorthd-ovnnorthd-dockercfg-4kdgt" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.050007 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovnnorthd-scripts" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.063025 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.147900 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/271ab239-a9b6-47bf-a4a1-424db4c922a5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.147957 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz9h7\" (UniqueName: \"kubernetes.io/projected/271ab239-a9b6-47bf-a4a1-424db4c922a5-kube-api-access-vz9h7\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.148004 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/271ab239-a9b6-47bf-a4a1-424db4c922a5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.148025 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/271ab239-a9b6-47bf-a4a1-424db4c922a5-scripts\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.148289 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/271ab239-a9b6-47bf-a4a1-424db4c922a5-config\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.148357 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/271ab239-a9b6-47bf-a4a1-424db4c922a5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.148393 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/271ab239-a9b6-47bf-a4a1-424db4c922a5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.249561 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/271ab239-a9b6-47bf-a4a1-424db4c922a5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " 
pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.249615 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz9h7\" (UniqueName: \"kubernetes.io/projected/271ab239-a9b6-47bf-a4a1-424db4c922a5-kube-api-access-vz9h7\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.249654 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271ab239-a9b6-47bf-a4a1-424db4c922a5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.249670 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/271ab239-a9b6-47bf-a4a1-424db4c922a5-scripts\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.249689 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/271ab239-a9b6-47bf-a4a1-424db4c922a5-config\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.249725 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/271ab239-a9b6-47bf-a4a1-424db4c922a5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.249749 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-rundir\" (UniqueName: 
\"kubernetes.io/empty-dir/271ab239-a9b6-47bf-a4a1-424db4c922a5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.250312 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-rundir\" (UniqueName: \"kubernetes.io/empty-dir/271ab239-a9b6-47bf-a4a1-424db4c922a5-ovn-rundir\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.250977 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/271ab239-a9b6-47bf-a4a1-424db4c922a5-config\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.251041 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/271ab239-a9b6-47bf-a4a1-424db4c922a5-scripts\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.255654 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-northd-tls-certs\" (UniqueName: \"kubernetes.io/secret/271ab239-a9b6-47bf-a4a1-424db4c922a5-ovn-northd-tls-certs\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.255803 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/271ab239-a9b6-47bf-a4a1-424db4c922a5-combined-ca-bundle\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.257700 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"metrics-certs-tls-certs\" (UniqueName: \"kubernetes.io/secret/271ab239-a9b6-47bf-a4a1-424db4c922a5-metrics-certs-tls-certs\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.280401 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz9h7\" (UniqueName: \"kubernetes.io/projected/271ab239-a9b6-47bf-a4a1-424db4c922a5-kube-api-access-vz9h7\") pod \"ovn-northd-0\" (UID: \"271ab239-a9b6-47bf-a4a1-424db4c922a5\") " pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.372348 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-northd-0" Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.798207 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-northd-0"] Oct 01 15:58:01 crc kubenswrapper[4949]: I1001 15:58:01.895702 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"271ab239-a9b6-47bf-a4a1-424db4c922a5","Type":"ContainerStarted","Data":"e4d70b8ad5b7c645e24c8b11db2a3cb4616b60071b10417fcdb713d69641223e"} Oct 01 15:58:03 crc kubenswrapper[4949]: I1001 15:58:03.790933 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-galera-0" Oct 01 15:58:03 crc kubenswrapper[4949]: I1001 15:58:03.791515 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-galera-0" Oct 01 15:58:03 crc kubenswrapper[4949]: I1001 15:58:03.834672 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-galera-0" Oct 01 15:58:03 crc kubenswrapper[4949]: I1001 15:58:03.911196 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" 
event={"ID":"271ab239-a9b6-47bf-a4a1-424db4c922a5","Type":"ContainerStarted","Data":"ac3ea70a220b94c4c481ba9d7d0b619d4576d85d9db27b48f36b47d671489b4d"} Oct 01 15:58:03 crc kubenswrapper[4949]: I1001 15:58:03.911257 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-northd-0" event={"ID":"271ab239-a9b6-47bf-a4a1-424db4c922a5","Type":"ContainerStarted","Data":"7b9447a87305fc7418e7d116aa736c69584a7f3c2367c0ada869e7c1c6aff22d"} Oct 01 15:58:03 crc kubenswrapper[4949]: I1001 15:58:03.913148 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/openstack-cell1-galera-0" Oct 01 15:58:03 crc kubenswrapper[4949]: I1001 15:58:03.913672 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/openstack-cell1-galera-0" Oct 01 15:58:03 crc kubenswrapper[4949]: I1001 15:58:03.946021 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-northd-0" podStartSLOduration=1.777180738 podStartE2EDuration="2.946001933s" podCreationTimestamp="2025-10-01 15:58:01 +0000 UTC" firstStartedPulling="2025-10-01 15:58:01.817111096 +0000 UTC m=+981.122717287" lastFinishedPulling="2025-10-01 15:58:02.985932291 +0000 UTC m=+982.291538482" observedRunningTime="2025-10-01 15:58:03.94477067 +0000 UTC m=+983.250376871" watchObservedRunningTime="2025-10-01 15:58:03.946001933 +0000 UTC m=+983.251608134" Oct 01 15:58:03 crc kubenswrapper[4949]: I1001 15:58:03.963243 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-galera-0" Oct 01 15:58:04 crc kubenswrapper[4949]: I1001 15:58:04.445007 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-create-zzmsh"] Oct 01 15:58:04 crc kubenswrapper[4949]: I1001 15:58:04.446466 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zzmsh" Oct 01 15:58:04 crc kubenswrapper[4949]: I1001 15:58:04.456152 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zzmsh"] Oct 01 15:58:04 crc kubenswrapper[4949]: I1001 15:58:04.608761 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmb92\" (UniqueName: \"kubernetes.io/projected/1ff8b95e-dfbc-4a49-b845-7a598a5acb7d-kube-api-access-dmb92\") pod \"placement-db-create-zzmsh\" (UID: \"1ff8b95e-dfbc-4a49-b845-7a598a5acb7d\") " pod="openstack/placement-db-create-zzmsh" Oct 01 15:58:04 crc kubenswrapper[4949]: I1001 15:58:04.709790 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dmb92\" (UniqueName: \"kubernetes.io/projected/1ff8b95e-dfbc-4a49-b845-7a598a5acb7d-kube-api-access-dmb92\") pod \"placement-db-create-zzmsh\" (UID: \"1ff8b95e-dfbc-4a49-b845-7a598a5acb7d\") " pod="openstack/placement-db-create-zzmsh" Oct 01 15:58:04 crc kubenswrapper[4949]: I1001 15:58:04.729190 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmb92\" (UniqueName: \"kubernetes.io/projected/1ff8b95e-dfbc-4a49-b845-7a598a5acb7d-kube-api-access-dmb92\") pod \"placement-db-create-zzmsh\" (UID: \"1ff8b95e-dfbc-4a49-b845-7a598a5acb7d\") " pod="openstack/placement-db-create-zzmsh" Oct 01 15:58:04 crc kubenswrapper[4949]: I1001 15:58:04.762743 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-create-zzmsh" Oct 01 15:58:04 crc kubenswrapper[4949]: I1001 15:58:04.918866 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ovn-northd-0" Oct 01 15:58:05 crc kubenswrapper[4949]: I1001 15:58:05.178344 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-create-zzmsh"] Oct 01 15:58:05 crc kubenswrapper[4949]: W1001 15:58:05.187346 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1ff8b95e_dfbc_4a49_b845_7a598a5acb7d.slice/crio-a1db097451369cdb904e674d4f282cc4e5a9d1453b7cf5e54fb79367fbd88fd7 WatchSource:0}: Error finding container a1db097451369cdb904e674d4f282cc4e5a9d1453b7cf5e54fb79367fbd88fd7: Status 404 returned error can't find the container with id a1db097451369cdb904e674d4f282cc4e5a9d1453b7cf5e54fb79367fbd88fd7 Oct 01 15:58:05 crc kubenswrapper[4949]: I1001 15:58:05.926109 4949 generic.go:334] "Generic (PLEG): container finished" podID="1ff8b95e-dfbc-4a49-b845-7a598a5acb7d" containerID="6c32a17103cde505f4bf541b6fc983bb674e52d6a85dbed5e9173a599e671813" exitCode=0 Oct 01 15:58:05 crc kubenswrapper[4949]: I1001 15:58:05.926230 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zzmsh" event={"ID":"1ff8b95e-dfbc-4a49-b845-7a598a5acb7d","Type":"ContainerDied","Data":"6c32a17103cde505f4bf541b6fc983bb674e52d6a85dbed5e9173a599e671813"} Oct 01 15:58:05 crc kubenswrapper[4949]: I1001 15:58:05.926466 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zzmsh" event={"ID":"1ff8b95e-dfbc-4a49-b845-7a598a5acb7d","Type":"ContainerStarted","Data":"a1db097451369cdb904e674d4f282cc4e5a9d1453b7cf5e54fb79367fbd88fd7"} Oct 01 15:58:06 crc kubenswrapper[4949]: I1001 15:58:06.039689 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/openstack-cell1-galera-0" Oct 01 
15:58:06 crc kubenswrapper[4949]: I1001 15:58:06.114944 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/openstack-cell1-galera-0" podUID="fc01020a-ebfd-4c4b-b211-d7da1f9aa357" containerName="galera" probeResult="failure" output=< Oct 01 15:58:06 crc kubenswrapper[4949]: wsrep_local_state_comment (Joined) differs from Synced Oct 01 15:58:06 crc kubenswrapper[4949]: > Oct 01 15:58:06 crc kubenswrapper[4949]: I1001 15:58:06.269856 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 01 15:58:07 crc kubenswrapper[4949]: I1001 15:58:07.270540 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zzmsh" Oct 01 15:58:07 crc kubenswrapper[4949]: I1001 15:58:07.368247 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmb92\" (UniqueName: \"kubernetes.io/projected/1ff8b95e-dfbc-4a49-b845-7a598a5acb7d-kube-api-access-dmb92\") pod \"1ff8b95e-dfbc-4a49-b845-7a598a5acb7d\" (UID: \"1ff8b95e-dfbc-4a49-b845-7a598a5acb7d\") " Oct 01 15:58:07 crc kubenswrapper[4949]: I1001 15:58:07.373832 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ff8b95e-dfbc-4a49-b845-7a598a5acb7d-kube-api-access-dmb92" (OuterVolumeSpecName: "kube-api-access-dmb92") pod "1ff8b95e-dfbc-4a49-b845-7a598a5acb7d" (UID: "1ff8b95e-dfbc-4a49-b845-7a598a5acb7d"). InnerVolumeSpecName "kube-api-access-dmb92". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:07 crc kubenswrapper[4949]: I1001 15:58:07.470093 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dmb92\" (UniqueName: \"kubernetes.io/projected/1ff8b95e-dfbc-4a49-b845-7a598a5acb7d-kube-api-access-dmb92\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:07 crc kubenswrapper[4949]: I1001 15:58:07.946226 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-create-zzmsh" event={"ID":"1ff8b95e-dfbc-4a49-b845-7a598a5acb7d","Type":"ContainerDied","Data":"a1db097451369cdb904e674d4f282cc4e5a9d1453b7cf5e54fb79367fbd88fd7"} Oct 01 15:58:07 crc kubenswrapper[4949]: I1001 15:58:07.946548 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1db097451369cdb904e674d4f282cc4e5a9d1453b7cf5e54fb79367fbd88fd7" Oct 01 15:58:07 crc kubenswrapper[4949]: I1001 15:58:07.946286 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-create-zzmsh" Oct 01 15:58:08 crc kubenswrapper[4949]: I1001 15:58:08.523272 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" Oct 01 15:58:08 crc kubenswrapper[4949]: I1001 15:58:08.927302 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.018249 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-865wx"] Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.018485 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" podUID="c2d58424-ec27-4ff6-98e0-241e5629484d" containerName="dnsmasq-dns" containerID="cri-o://071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80" gracePeriod=10 Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 
15:58:09.511934 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.581176 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-create-jc87k"] Oct 01 15:58:09 crc kubenswrapper[4949]: E1001 15:58:09.581595 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ff8b95e-dfbc-4a49-b845-7a598a5acb7d" containerName="mariadb-database-create" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.581624 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ff8b95e-dfbc-4a49-b845-7a598a5acb7d" containerName="mariadb-database-create" Oct 01 15:58:09 crc kubenswrapper[4949]: E1001 15:58:09.581656 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d58424-ec27-4ff6-98e0-241e5629484d" containerName="dnsmasq-dns" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.581663 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d58424-ec27-4ff6-98e0-241e5629484d" containerName="dnsmasq-dns" Oct 01 15:58:09 crc kubenswrapper[4949]: E1001 15:58:09.581684 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2d58424-ec27-4ff6-98e0-241e5629484d" containerName="init" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.581692 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2d58424-ec27-4ff6-98e0-241e5629484d" containerName="init" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.581899 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2d58424-ec27-4ff6-98e0-241e5629484d" containerName="dnsmasq-dns" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.581919 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ff8b95e-dfbc-4a49-b845-7a598a5acb7d" containerName="mariadb-database-create" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.582578 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jc87k" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.595387 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jc87k"] Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.611703 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4swdd\" (UniqueName: \"kubernetes.io/projected/c2d58424-ec27-4ff6-98e0-241e5629484d-kube-api-access-4swdd\") pod \"c2d58424-ec27-4ff6-98e0-241e5629484d\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.611769 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-dns-svc\") pod \"c2d58424-ec27-4ff6-98e0-241e5629484d\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.611795 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-ovsdbserver-sb\") pod \"c2d58424-ec27-4ff6-98e0-241e5629484d\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.611840 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-config\") pod \"c2d58424-ec27-4ff6-98e0-241e5629484d\" (UID: \"c2d58424-ec27-4ff6-98e0-241e5629484d\") " Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.624477 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2d58424-ec27-4ff6-98e0-241e5629484d-kube-api-access-4swdd" (OuterVolumeSpecName: "kube-api-access-4swdd") pod "c2d58424-ec27-4ff6-98e0-241e5629484d" (UID: "c2d58424-ec27-4ff6-98e0-241e5629484d"). 
InnerVolumeSpecName "kube-api-access-4swdd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.648645 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c2d58424-ec27-4ff6-98e0-241e5629484d" (UID: "c2d58424-ec27-4ff6-98e0-241e5629484d"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.650010 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c2d58424-ec27-4ff6-98e0-241e5629484d" (UID: "c2d58424-ec27-4ff6-98e0-241e5629484d"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.651928 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-config" (OuterVolumeSpecName: "config") pod "c2d58424-ec27-4ff6-98e0-241e5629484d" (UID: "c2d58424-ec27-4ff6-98e0-241e5629484d"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.714062 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-smvxn\" (UniqueName: \"kubernetes.io/projected/0f2a5755-0788-460c-bc1e-0a261a9a6e0f-kube-api-access-smvxn\") pod \"glance-db-create-jc87k\" (UID: \"0f2a5755-0788-460c-bc1e-0a261a9a6e0f\") " pod="openstack/glance-db-create-jc87k" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.714232 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4swdd\" (UniqueName: \"kubernetes.io/projected/c2d58424-ec27-4ff6-98e0-241e5629484d-kube-api-access-4swdd\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.714256 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.714275 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.714291 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c2d58424-ec27-4ff6-98e0-241e5629484d-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.815887 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-smvxn\" (UniqueName: \"kubernetes.io/projected/0f2a5755-0788-460c-bc1e-0a261a9a6e0f-kube-api-access-smvxn\") pod \"glance-db-create-jc87k\" (UID: \"0f2a5755-0788-460c-bc1e-0a261a9a6e0f\") " pod="openstack/glance-db-create-jc87k" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.832162 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-smvxn\" (UniqueName: \"kubernetes.io/projected/0f2a5755-0788-460c-bc1e-0a261a9a6e0f-kube-api-access-smvxn\") pod \"glance-db-create-jc87k\" (UID: \"0f2a5755-0788-460c-bc1e-0a261a9a6e0f\") " pod="openstack/glance-db-create-jc87k" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.899858 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-create-jc87k" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.971842 4949 generic.go:334] "Generic (PLEG): container finished" podID="c2d58424-ec27-4ff6-98e0-241e5629484d" containerID="071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80" exitCode=0 Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.971878 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" event={"ID":"c2d58424-ec27-4ff6-98e0-241e5629484d","Type":"ContainerDied","Data":"071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80"} Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.971902 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" event={"ID":"c2d58424-ec27-4ff6-98e0-241e5629484d","Type":"ContainerDied","Data":"9f4ce6185fb3d85f03dc36c0ed6c717b4a183c1ebfb5134dac328823dea9e71f"} Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.971917 4949 scope.go:117] "RemoveContainer" containerID="071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80" Oct 01 15:58:09 crc kubenswrapper[4949]: I1001 15:58:09.972027 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-7f896c8c65-865wx" Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.010155 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-865wx"] Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.011890 4949 scope.go:117] "RemoveContainer" containerID="65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a" Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.017585 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7f896c8c65-865wx"] Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.031584 4949 scope.go:117] "RemoveContainer" containerID="071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80" Oct 01 15:58:10 crc kubenswrapper[4949]: E1001 15:58:10.031931 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80\": container with ID starting with 071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80 not found: ID does not exist" containerID="071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80" Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.031956 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80"} err="failed to get container status \"071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80\": rpc error: code = NotFound desc = could not find container \"071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80\": container with ID starting with 071b578e723fef784e68730ac50abe7cfb563e6bcf4a0ebeaba516acdea71a80 not found: ID does not exist" Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.031979 4949 scope.go:117] "RemoveContainer" containerID="65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a" Oct 01 
15:58:10 crc kubenswrapper[4949]: E1001 15:58:10.032169 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a\": container with ID starting with 65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a not found: ID does not exist" containerID="65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a" Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.032185 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a"} err="failed to get container status \"65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a\": rpc error: code = NotFound desc = could not find container \"65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a\": container with ID starting with 65d35f733b9717d27c0d44c6eda7b5a139a17659db5ab8d4e5750c5e8e14c56a not found: ID does not exist" Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.343822 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-create-jc87k"] Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.986739 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jc87k" event={"ID":"0f2a5755-0788-460c-bc1e-0a261a9a6e0f","Type":"ContainerStarted","Data":"3c8d984c8dbb96e2cc5f2b1b532cbeb14776817532ffbe6329c3568bdcf08790"} Oct 01 15:58:10 crc kubenswrapper[4949]: I1001 15:58:10.987209 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jc87k" event={"ID":"0f2a5755-0788-460c-bc1e-0a261a9a6e0f","Type":"ContainerStarted","Data":"2a4f5bc8bbbc0c84a0b592ec5a8821056f739241d5c3124eb45f24666114300a"} Oct 01 15:58:11 crc kubenswrapper[4949]: I1001 15:58:11.008417 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openstack/glance-db-create-jc87k" podStartSLOduration=2.008399919 podStartE2EDuration="2.008399919s" podCreationTimestamp="2025-10-01 15:58:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:58:10.999310702 +0000 UTC m=+990.304916893" watchObservedRunningTime="2025-10-01 15:58:11.008399919 +0000 UTC m=+990.314006110" Oct 01 15:58:11 crc kubenswrapper[4949]: I1001 15:58:11.621070 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2d58424-ec27-4ff6-98e0-241e5629484d" path="/var/lib/kubelet/pods/c2d58424-ec27-4ff6-98e0-241e5629484d/volumes" Oct 01 15:58:12 crc kubenswrapper[4949]: I1001 15:58:12.005201 4949 generic.go:334] "Generic (PLEG): container finished" podID="0f2a5755-0788-460c-bc1e-0a261a9a6e0f" containerID="3c8d984c8dbb96e2cc5f2b1b532cbeb14776817532ffbe6329c3568bdcf08790" exitCode=0 Oct 01 15:58:12 crc kubenswrapper[4949]: I1001 15:58:12.005244 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jc87k" event={"ID":"0f2a5755-0788-460c-bc1e-0a261a9a6e0f","Type":"ContainerDied","Data":"3c8d984c8dbb96e2cc5f2b1b532cbeb14776817532ffbe6329c3568bdcf08790"} Oct 01 15:58:13 crc kubenswrapper[4949]: I1001 15:58:13.296909 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jc87k" Oct 01 15:58:13 crc kubenswrapper[4949]: I1001 15:58:13.371863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-smvxn\" (UniqueName: \"kubernetes.io/projected/0f2a5755-0788-460c-bc1e-0a261a9a6e0f-kube-api-access-smvxn\") pod \"0f2a5755-0788-460c-bc1e-0a261a9a6e0f\" (UID: \"0f2a5755-0788-460c-bc1e-0a261a9a6e0f\") " Oct 01 15:58:13 crc kubenswrapper[4949]: I1001 15:58:13.378359 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f2a5755-0788-460c-bc1e-0a261a9a6e0f-kube-api-access-smvxn" (OuterVolumeSpecName: "kube-api-access-smvxn") pod "0f2a5755-0788-460c-bc1e-0a261a9a6e0f" (UID: "0f2a5755-0788-460c-bc1e-0a261a9a6e0f"). InnerVolumeSpecName "kube-api-access-smvxn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:13 crc kubenswrapper[4949]: I1001 15:58:13.474216 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-smvxn\" (UniqueName: \"kubernetes.io/projected/0f2a5755-0788-460c-bc1e-0a261a9a6e0f-kube-api-access-smvxn\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:13 crc kubenswrapper[4949]: I1001 15:58:13.960369 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/openstack-cell1-galera-0" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.020275 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-create-jc87k" event={"ID":"0f2a5755-0788-460c-bc1e-0a261a9a6e0f","Type":"ContainerDied","Data":"2a4f5bc8bbbc0c84a0b592ec5a8821056f739241d5c3124eb45f24666114300a"} Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.020326 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a4f5bc8bbbc0c84a0b592ec5a8821056f739241d5c3124eb45f24666114300a" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.020352 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-create-jc87k" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.054635 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-create-xsqct"] Oct 01 15:58:14 crc kubenswrapper[4949]: E1001 15:58:14.055019 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0f2a5755-0788-460c-bc1e-0a261a9a6e0f" containerName="mariadb-database-create" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.055039 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="0f2a5755-0788-460c-bc1e-0a261a9a6e0f" containerName="mariadb-database-create" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.055262 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="0f2a5755-0788-460c-bc1e-0a261a9a6e0f" containerName="mariadb-database-create" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.055905 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xsqct" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.065585 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xsqct"] Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.184596 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kpjs4\" (UniqueName: \"kubernetes.io/projected/fe6ee0ce-f7e0-43b6-a591-e75632e2cf00-kube-api-access-kpjs4\") pod \"keystone-db-create-xsqct\" (UID: \"fe6ee0ce-f7e0-43b6-a591-e75632e2cf00\") " pod="openstack/keystone-db-create-xsqct" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.286411 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kpjs4\" (UniqueName: \"kubernetes.io/projected/fe6ee0ce-f7e0-43b6-a591-e75632e2cf00-kube-api-access-kpjs4\") pod \"keystone-db-create-xsqct\" (UID: \"fe6ee0ce-f7e0-43b6-a591-e75632e2cf00\") " pod="openstack/keystone-db-create-xsqct" Oct 
01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.304662 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kpjs4\" (UniqueName: \"kubernetes.io/projected/fe6ee0ce-f7e0-43b6-a591-e75632e2cf00-kube-api-access-kpjs4\") pod \"keystone-db-create-xsqct\" (UID: \"fe6ee0ce-f7e0-43b6-a591-e75632e2cf00\") " pod="openstack/keystone-db-create-xsqct" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.372097 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xsqct" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.483648 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-da20-account-create-hjm96"] Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.484832 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-da20-account-create-hjm96" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.490815 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-db-secret" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.492633 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-da20-account-create-hjm96"] Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.590451 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zvwxv\" (UniqueName: \"kubernetes.io/projected/1337a138-272f-49d7-b806-2f097cfb71b1-kube-api-access-zvwxv\") pod \"placement-da20-account-create-hjm96\" (UID: \"1337a138-272f-49d7-b806-2f097cfb71b1\") " pod="openstack/placement-da20-account-create-hjm96" Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.691642 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zvwxv\" (UniqueName: \"kubernetes.io/projected/1337a138-272f-49d7-b806-2f097cfb71b1-kube-api-access-zvwxv\") pod 
\"placement-da20-account-create-hjm96\" (UID: \"1337a138-272f-49d7-b806-2f097cfb71b1\") " pod="openstack/placement-da20-account-create-hjm96"
Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.707646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zvwxv\" (UniqueName: \"kubernetes.io/projected/1337a138-272f-49d7-b806-2f097cfb71b1-kube-api-access-zvwxv\") pod \"placement-da20-account-create-hjm96\" (UID: \"1337a138-272f-49d7-b806-2f097cfb71b1\") " pod="openstack/placement-da20-account-create-hjm96"
Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.809169 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-create-xsqct"]
Oct 01 15:58:14 crc kubenswrapper[4949]: I1001 15:58:14.809428 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/placement-da20-account-create-hjm96"
Oct 01 15:58:14 crc kubenswrapper[4949]: W1001 15:58:14.816563 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podfe6ee0ce_f7e0_43b6_a591_e75632e2cf00.slice/crio-dc2f9fb2b1b5d541deb58716380739bd495000ba01d281f1cadb2a6aba506121 WatchSource:0}: Error finding container dc2f9fb2b1b5d541deb58716380739bd495000ba01d281f1cadb2a6aba506121: Status 404 returned error can't find the container with id dc2f9fb2b1b5d541deb58716380739bd495000ba01d281f1cadb2a6aba506121
Oct 01 15:58:15 crc kubenswrapper[4949]: I1001 15:58:15.036440 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xsqct" event={"ID":"fe6ee0ce-f7e0-43b6-a591-e75632e2cf00","Type":"ContainerStarted","Data":"dc2f9fb2b1b5d541deb58716380739bd495000ba01d281f1cadb2a6aba506121"}
Oct 01 15:58:15 crc kubenswrapper[4949]: W1001 15:58:15.232789 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1337a138_272f_49d7_b806_2f097cfb71b1.slice/crio-4445eb9dbfae62e7293992fe278b5b7c70db6a5da6510b5e274663bf27ca6ec8 WatchSource:0}: Error finding container 4445eb9dbfae62e7293992fe278b5b7c70db6a5da6510b5e274663bf27ca6ec8: Status 404 returned error can't find the container with id 4445eb9dbfae62e7293992fe278b5b7c70db6a5da6510b5e274663bf27ca6ec8
Oct 01 15:58:15 crc kubenswrapper[4949]: I1001 15:58:15.234449 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-da20-account-create-hjm96"]
Oct 01 15:58:16 crc kubenswrapper[4949]: I1001 15:58:16.045526 4949 generic.go:334] "Generic (PLEG): container finished" podID="fe6ee0ce-f7e0-43b6-a591-e75632e2cf00" containerID="eac26c162ffa85b7dec0072552cd752e3f6c83e90644ca7299e7fb92239e443f" exitCode=0
Oct 01 15:58:16 crc kubenswrapper[4949]: I1001 15:58:16.045573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xsqct" event={"ID":"fe6ee0ce-f7e0-43b6-a591-e75632e2cf00","Type":"ContainerDied","Data":"eac26c162ffa85b7dec0072552cd752e3f6c83e90644ca7299e7fb92239e443f"}
Oct 01 15:58:16 crc kubenswrapper[4949]: I1001 15:58:16.047434 4949 generic.go:334] "Generic (PLEG): container finished" podID="1337a138-272f-49d7-b806-2f097cfb71b1" containerID="0b05e9e3781ab725a7a1cbcdd714b6c8b99f34418c32227558eca0f90dff8adc" exitCode=0
Oct 01 15:58:16 crc kubenswrapper[4949]: I1001 15:58:16.047487 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-da20-account-create-hjm96" event={"ID":"1337a138-272f-49d7-b806-2f097cfb71b1","Type":"ContainerDied","Data":"0b05e9e3781ab725a7a1cbcdd714b6c8b99f34418c32227558eca0f90dff8adc"}
Oct 01 15:58:16 crc kubenswrapper[4949]: I1001 15:58:16.047534 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-da20-account-create-hjm96" event={"ID":"1337a138-272f-49d7-b806-2f097cfb71b1","Type":"ContainerStarted","Data":"4445eb9dbfae62e7293992fe278b5b7c70db6a5da6510b5e274663bf27ca6ec8"}
Oct 01 15:58:16 crc kubenswrapper[4949]: I1001 15:58:16.431850 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-northd-0"
Oct 01 15:58:17 crc kubenswrapper[4949]: I1001 15:58:17.454432 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-da20-account-create-hjm96"
Oct 01 15:58:17 crc kubenswrapper[4949]: I1001 15:58:17.460505 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xsqct"
Oct 01 15:58:17 crc kubenswrapper[4949]: I1001 15:58:17.532603 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zvwxv\" (UniqueName: \"kubernetes.io/projected/1337a138-272f-49d7-b806-2f097cfb71b1-kube-api-access-zvwxv\") pod \"1337a138-272f-49d7-b806-2f097cfb71b1\" (UID: \"1337a138-272f-49d7-b806-2f097cfb71b1\") "
Oct 01 15:58:17 crc kubenswrapper[4949]: I1001 15:58:17.533649 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kpjs4\" (UniqueName: \"kubernetes.io/projected/fe6ee0ce-f7e0-43b6-a591-e75632e2cf00-kube-api-access-kpjs4\") pod \"fe6ee0ce-f7e0-43b6-a591-e75632e2cf00\" (UID: \"fe6ee0ce-f7e0-43b6-a591-e75632e2cf00\") "
Oct 01 15:58:17 crc kubenswrapper[4949]: I1001 15:58:17.539534 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1337a138-272f-49d7-b806-2f097cfb71b1-kube-api-access-zvwxv" (OuterVolumeSpecName: "kube-api-access-zvwxv") pod "1337a138-272f-49d7-b806-2f097cfb71b1" (UID: "1337a138-272f-49d7-b806-2f097cfb71b1"). InnerVolumeSpecName "kube-api-access-zvwxv". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 15:58:17 crc kubenswrapper[4949]: I1001 15:58:17.540753 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fe6ee0ce-f7e0-43b6-a591-e75632e2cf00-kube-api-access-kpjs4" (OuterVolumeSpecName: "kube-api-access-kpjs4") pod "fe6ee0ce-f7e0-43b6-a591-e75632e2cf00" (UID: "fe6ee0ce-f7e0-43b6-a591-e75632e2cf00"). InnerVolumeSpecName "kube-api-access-kpjs4". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 15:58:17 crc kubenswrapper[4949]: I1001 15:58:17.635316 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zvwxv\" (UniqueName: \"kubernetes.io/projected/1337a138-272f-49d7-b806-2f097cfb71b1-kube-api-access-zvwxv\") on node \"crc\" DevicePath \"\""
Oct 01 15:58:17 crc kubenswrapper[4949]: I1001 15:58:17.635360 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kpjs4\" (UniqueName: \"kubernetes.io/projected/fe6ee0ce-f7e0-43b6-a591-e75632e2cf00-kube-api-access-kpjs4\") on node \"crc\" DevicePath \"\""
Oct 01 15:58:18 crc kubenswrapper[4949]: I1001 15:58:18.063752 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-create-xsqct"
Oct 01 15:58:18 crc kubenswrapper[4949]: I1001 15:58:18.063791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-create-xsqct" event={"ID":"fe6ee0ce-f7e0-43b6-a591-e75632e2cf00","Type":"ContainerDied","Data":"dc2f9fb2b1b5d541deb58716380739bd495000ba01d281f1cadb2a6aba506121"}
Oct 01 15:58:18 crc kubenswrapper[4949]: I1001 15:58:18.063824 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc2f9fb2b1b5d541deb58716380739bd495000ba01d281f1cadb2a6aba506121"
Oct 01 15:58:18 crc kubenswrapper[4949]: I1001 15:58:18.065446 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-da20-account-create-hjm96" event={"ID":"1337a138-272f-49d7-b806-2f097cfb71b1","Type":"ContainerDied","Data":"4445eb9dbfae62e7293992fe278b5b7c70db6a5da6510b5e274663bf27ca6ec8"}
Oct 01 15:58:18 crc kubenswrapper[4949]: I1001 15:58:18.065483 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4445eb9dbfae62e7293992fe278b5b7c70db6a5da6510b5e274663bf27ca6ec8"
Oct 01 15:58:18 crc kubenswrapper[4949]: I1001 15:58:18.065533 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-da20-account-create-hjm96"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.701459 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-6107-account-create-2hp27"]
Oct 01 15:58:19 crc kubenswrapper[4949]: E1001 15:58:19.701806 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fe6ee0ce-f7e0-43b6-a591-e75632e2cf00" containerName="mariadb-database-create"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.701819 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fe6ee0ce-f7e0-43b6-a591-e75632e2cf00" containerName="mariadb-database-create"
Oct 01 15:58:19 crc kubenswrapper[4949]: E1001 15:58:19.701838 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1337a138-272f-49d7-b806-2f097cfb71b1" containerName="mariadb-account-create"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.701844 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1337a138-272f-49d7-b806-2f097cfb71b1" containerName="mariadb-account-create"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.701987 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1337a138-272f-49d7-b806-2f097cfb71b1" containerName="mariadb-account-create"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.701998 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fe6ee0ce-f7e0-43b6-a591-e75632e2cf00" containerName="mariadb-database-create"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.702579 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6107-account-create-2hp27"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.704309 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-db-secret"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.710740 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6107-account-create-2hp27"]
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.769883 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdpbj\" (UniqueName: \"kubernetes.io/projected/6ed0c84f-caed-42c8-bf16-0091acce0f6b-kube-api-access-jdpbj\") pod \"glance-6107-account-create-2hp27\" (UID: \"6ed0c84f-caed-42c8-bf16-0091acce0f6b\") " pod="openstack/glance-6107-account-create-2hp27"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.872006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jdpbj\" (UniqueName: \"kubernetes.io/projected/6ed0c84f-caed-42c8-bf16-0091acce0f6b-kube-api-access-jdpbj\") pod \"glance-6107-account-create-2hp27\" (UID: \"6ed0c84f-caed-42c8-bf16-0091acce0f6b\") " pod="openstack/glance-6107-account-create-2hp27"
Oct 01 15:58:19 crc kubenswrapper[4949]: I1001 15:58:19.889807 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jdpbj\" (UniqueName: \"kubernetes.io/projected/6ed0c84f-caed-42c8-bf16-0091acce0f6b-kube-api-access-jdpbj\") pod \"glance-6107-account-create-2hp27\" (UID: \"6ed0c84f-caed-42c8-bf16-0091acce0f6b\") " pod="openstack/glance-6107-account-create-2hp27"
Oct 01 15:58:20 crc kubenswrapper[4949]: I1001 15:58:20.025282 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6107-account-create-2hp27"
Oct 01 15:58:20 crc kubenswrapper[4949]: I1001 15:58:20.507500 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-6107-account-create-2hp27"]
Oct 01 15:58:21 crc kubenswrapper[4949]: I1001 15:58:21.094770 4949 generic.go:334] "Generic (PLEG): container finished" podID="6ed0c84f-caed-42c8-bf16-0091acce0f6b" containerID="89ff5a4754b612a7bc103620e753b679367d62a4dbd39f5a9dd5230b82d3713e" exitCode=0
Oct 01 15:58:21 crc kubenswrapper[4949]: I1001 15:58:21.094847 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6107-account-create-2hp27" event={"ID":"6ed0c84f-caed-42c8-bf16-0091acce0f6b","Type":"ContainerDied","Data":"89ff5a4754b612a7bc103620e753b679367d62a4dbd39f5a9dd5230b82d3713e"}
Oct 01 15:58:21 crc kubenswrapper[4949]: I1001 15:58:21.096319 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6107-account-create-2hp27" event={"ID":"6ed0c84f-caed-42c8-bf16-0091acce0f6b","Type":"ContainerStarted","Data":"b7a4c9e6afbb136a7f6594e84b23e5995e1d5f907383b7ddff3e2d1f0cf19c76"}
Oct 01 15:58:22 crc kubenswrapper[4949]: I1001 15:58:22.397025 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6107-account-create-2hp27"
Oct 01 15:58:22 crc kubenswrapper[4949]: I1001 15:58:22.413584 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdpbj\" (UniqueName: \"kubernetes.io/projected/6ed0c84f-caed-42c8-bf16-0091acce0f6b-kube-api-access-jdpbj\") pod \"6ed0c84f-caed-42c8-bf16-0091acce0f6b\" (UID: \"6ed0c84f-caed-42c8-bf16-0091acce0f6b\") "
Oct 01 15:58:22 crc kubenswrapper[4949]: I1001 15:58:22.424799 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ed0c84f-caed-42c8-bf16-0091acce0f6b-kube-api-access-jdpbj" (OuterVolumeSpecName: "kube-api-access-jdpbj") pod "6ed0c84f-caed-42c8-bf16-0091acce0f6b" (UID: "6ed0c84f-caed-42c8-bf16-0091acce0f6b"). InnerVolumeSpecName "kube-api-access-jdpbj". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 15:58:22 crc kubenswrapper[4949]: I1001 15:58:22.522215 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jdpbj\" (UniqueName: \"kubernetes.io/projected/6ed0c84f-caed-42c8-bf16-0091acce0f6b-kube-api-access-jdpbj\") on node \"crc\" DevicePath \"\""
Oct 01 15:58:23 crc kubenswrapper[4949]: I1001 15:58:23.111761 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-6107-account-create-2hp27" event={"ID":"6ed0c84f-caed-42c8-bf16-0091acce0f6b","Type":"ContainerDied","Data":"b7a4c9e6afbb136a7f6594e84b23e5995e1d5f907383b7ddff3e2d1f0cf19c76"}
Oct 01 15:58:23 crc kubenswrapper[4949]: I1001 15:58:23.111805 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7a4c9e6afbb136a7f6594e84b23e5995e1d5f907383b7ddff3e2d1f0cf19c76"
Oct 01 15:58:23 crc kubenswrapper[4949]: I1001 15:58:23.111834 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-6107-account-create-2hp27"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.196694 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-57a7-account-create-lnbg5"]
Oct 01 15:58:24 crc kubenswrapper[4949]: E1001 15:58:24.197282 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6ed0c84f-caed-42c8-bf16-0091acce0f6b" containerName="mariadb-account-create"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.197304 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6ed0c84f-caed-42c8-bf16-0091acce0f6b" containerName="mariadb-account-create"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.197460 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6ed0c84f-caed-42c8-bf16-0091acce0f6b" containerName="mariadb-account-create"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.197961 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57a7-account-create-lnbg5"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.201061 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-db-secret"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.216371 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57a7-account-create-lnbg5"]
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.252362 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgwkr\" (UniqueName: \"kubernetes.io/projected/ce6e3295-9412-4b03-a51e-2311ec5922aa-kube-api-access-sgwkr\") pod \"keystone-57a7-account-create-lnbg5\" (UID: \"ce6e3295-9412-4b03-a51e-2311ec5922aa\") " pod="openstack/keystone-57a7-account-create-lnbg5"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.353922 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgwkr\" (UniqueName: \"kubernetes.io/projected/ce6e3295-9412-4b03-a51e-2311ec5922aa-kube-api-access-sgwkr\") pod \"keystone-57a7-account-create-lnbg5\" (UID: \"ce6e3295-9412-4b03-a51e-2311ec5922aa\") " pod="openstack/keystone-57a7-account-create-lnbg5"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.370723 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgwkr\" (UniqueName: \"kubernetes.io/projected/ce6e3295-9412-4b03-a51e-2311ec5922aa-kube-api-access-sgwkr\") pod \"keystone-57a7-account-create-lnbg5\" (UID: \"ce6e3295-9412-4b03-a51e-2311ec5922aa\") " pod="openstack/keystone-57a7-account-create-lnbg5"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.517726 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57a7-account-create-lnbg5"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.846162 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-db-sync-76shp"]
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.853378 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.855311 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-76shp"]
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.855695 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7kv4g"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.855710 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-config-data"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.962235 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9kkz\" (UniqueName: \"kubernetes.io/projected/21738a3b-69a2-4955-b45d-fe1f31585951-kube-api-access-r9kkz\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.962292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-combined-ca-bundle\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.962315 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-config-data\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.962344 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-db-sync-config-data\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:24 crc kubenswrapper[4949]: I1001 15:58:24.963550 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-57a7-account-create-lnbg5"]
Oct 01 15:58:24 crc kubenswrapper[4949]: W1001 15:58:24.968285 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podce6e3295_9412_4b03_a51e_2311ec5922aa.slice/crio-7fff7e4865a4c24a42d03426f04e8596dc295268b74737c19d5a21ad9194a90f WatchSource:0}: Error finding container 7fff7e4865a4c24a42d03426f04e8596dc295268b74737c19d5a21ad9194a90f: Status 404 returned error can't find the container with id 7fff7e4865a4c24a42d03426f04e8596dc295268b74737c19d5a21ad9194a90f
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.064177 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r9kkz\" (UniqueName: \"kubernetes.io/projected/21738a3b-69a2-4955-b45d-fe1f31585951-kube-api-access-r9kkz\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.064286 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-combined-ca-bundle\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.064326 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-config-data\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.064366 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-db-sync-config-data\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.070538 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-config-data\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.070728 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-combined-ca-bundle\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.071462 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-db-sync-config-data\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.085473 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9kkz\" (UniqueName: \"kubernetes.io/projected/21738a3b-69a2-4955-b45d-fe1f31585951-kube-api-access-r9kkz\") pod \"glance-db-sync-76shp\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.127819 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57a7-account-create-lnbg5" event={"ID":"ce6e3295-9412-4b03-a51e-2311ec5922aa","Type":"ContainerStarted","Data":"7fff7e4865a4c24a42d03426f04e8596dc295268b74737c19d5a21ad9194a90f"}
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.176359 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-76shp"
Oct 01 15:58:25 crc kubenswrapper[4949]: I1001 15:58:25.504617 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-db-sync-76shp"]
Oct 01 15:58:26 crc kubenswrapper[4949]: I1001 15:58:26.050549 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4kkzs" podUID="230fbfcd-f990-42cf-88bb-9e4c4ae45a7d" containerName="ovn-controller" probeResult="failure" output=<
Oct 01 15:58:26 crc kubenswrapper[4949]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 01 15:58:26 crc kubenswrapper[4949]: >
Oct 01 15:58:26 crc kubenswrapper[4949]: I1001 15:58:26.136468 4949 generic.go:334] "Generic (PLEG): container finished" podID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" containerID="9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77" exitCode=0
Oct 01 15:58:26 crc kubenswrapper[4949]: I1001 15:58:26.136534 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2","Type":"ContainerDied","Data":"9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77"}
Oct 01 15:58:26 crc kubenswrapper[4949]: I1001 15:58:26.138300 4949 generic.go:334] "Generic (PLEG): container finished" podID="ce6e3295-9412-4b03-a51e-2311ec5922aa" containerID="f303af1128e13f4226b028c9c93eeb28cfe40722f1830db8be37439cfa7047db" exitCode=0
Oct 01 15:58:26 crc kubenswrapper[4949]: I1001 15:58:26.138393 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57a7-account-create-lnbg5" event={"ID":"ce6e3295-9412-4b03-a51e-2311ec5922aa","Type":"ContainerDied","Data":"f303af1128e13f4226b028c9c93eeb28cfe40722f1830db8be37439cfa7047db"}
Oct 01 15:58:26 crc kubenswrapper[4949]: I1001 15:58:26.140688 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-76shp" event={"ID":"21738a3b-69a2-4955-b45d-fe1f31585951","Type":"ContainerStarted","Data":"d24e4bc377dac47c0ef189c5a8ac1d4ac871a365d8ffbd02f21d5f9a58bd533b"}
Oct 01 15:58:27 crc kubenswrapper[4949]: I1001 15:58:27.149897 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2","Type":"ContainerStarted","Data":"988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4"}
Oct 01 15:58:27 crc kubenswrapper[4949]: I1001 15:58:27.150432 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0"
Oct 01 15:58:27 crc kubenswrapper[4949]: I1001 15:58:27.176868 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=69.985774052 podStartE2EDuration="1m18.176850504s" podCreationTimestamp="2025-10-01 15:57:09 +0000 UTC" firstStartedPulling="2025-10-01 15:57:43.722880251 +0000 UTC m=+963.028486442" lastFinishedPulling="2025-10-01 15:57:51.913956703 +0000 UTC m=+971.219562894" observedRunningTime="2025-10-01 15:58:27.174842029 +0000 UTC m=+1006.480448240" watchObservedRunningTime="2025-10-01 15:58:27.176850504 +0000 UTC m=+1006.482456695"
Oct 01 15:58:27 crc kubenswrapper[4949]: I1001 15:58:27.451252 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57a7-account-create-lnbg5"
Oct 01 15:58:27 crc kubenswrapper[4949]: I1001 15:58:27.605511 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgwkr\" (UniqueName: \"kubernetes.io/projected/ce6e3295-9412-4b03-a51e-2311ec5922aa-kube-api-access-sgwkr\") pod \"ce6e3295-9412-4b03-a51e-2311ec5922aa\" (UID: \"ce6e3295-9412-4b03-a51e-2311ec5922aa\") "
Oct 01 15:58:27 crc kubenswrapper[4949]: I1001 15:58:27.610746 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce6e3295-9412-4b03-a51e-2311ec5922aa-kube-api-access-sgwkr" (OuterVolumeSpecName: "kube-api-access-sgwkr") pod "ce6e3295-9412-4b03-a51e-2311ec5922aa" (UID: "ce6e3295-9412-4b03-a51e-2311ec5922aa"). InnerVolumeSpecName "kube-api-access-sgwkr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 15:58:27 crc kubenswrapper[4949]: I1001 15:58:27.708259 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgwkr\" (UniqueName: \"kubernetes.io/projected/ce6e3295-9412-4b03-a51e-2311ec5922aa-kube-api-access-sgwkr\") on node \"crc\" DevicePath \"\""
Oct 01 15:58:28 crc kubenswrapper[4949]: I1001 15:58:28.169033 4949 generic.go:334] "Generic (PLEG): container finished" podID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" containerID="37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc" exitCode=0
Oct 01 15:58:28 crc kubenswrapper[4949]: I1001 15:58:28.169086 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a","Type":"ContainerDied","Data":"37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc"}
Oct 01 15:58:28 crc kubenswrapper[4949]: I1001 15:58:28.188205 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-57a7-account-create-lnbg5"
Oct 01 15:58:28 crc kubenswrapper[4949]: I1001 15:58:28.188274 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-57a7-account-create-lnbg5" event={"ID":"ce6e3295-9412-4b03-a51e-2311ec5922aa","Type":"ContainerDied","Data":"7fff7e4865a4c24a42d03426f04e8596dc295268b74737c19d5a21ad9194a90f"}
Oct 01 15:58:28 crc kubenswrapper[4949]: I1001 15:58:28.189283 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fff7e4865a4c24a42d03426f04e8596dc295268b74737c19d5a21ad9194a90f"
Oct 01 15:58:29 crc kubenswrapper[4949]: I1001 15:58:29.209177 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a","Type":"ContainerStarted","Data":"2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde"}
Oct 01 15:58:30 crc kubenswrapper[4949]: I1001 15:58:30.216463 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0"
Oct 01 15:58:30 crc kubenswrapper[4949]: I1001 15:58:30.245875 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=72.905670907 podStartE2EDuration="1m21.245856344s" podCreationTimestamp="2025-10-01 15:57:09 +0000 UTC" firstStartedPulling="2025-10-01 15:57:43.722931542 +0000 UTC m=+963.028537733" lastFinishedPulling="2025-10-01 15:57:52.063116979 +0000 UTC m=+971.368723170" observedRunningTime="2025-10-01 15:58:30.240406536 +0000 UTC m=+1009.546012737" watchObservedRunningTime="2025-10-01 15:58:30.245856344 +0000 UTC m=+1009.551462535"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.058958 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4kkzs" podUID="230fbfcd-f990-42cf-88bb-9e4c4ae45a7d" containerName="ovn-controller" probeResult="failure" output=<
Oct 01 15:58:31 crc kubenswrapper[4949]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status
Oct 01 15:58:31 crc kubenswrapper[4949]: >
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.070196 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-248dp"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.076381 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-ovs-248dp"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.297618 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4kkzs-config-b2flw"]
Oct 01 15:58:31 crc kubenswrapper[4949]: E1001 15:58:31.298300 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce6e3295-9412-4b03-a51e-2311ec5922aa" containerName="mariadb-account-create"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.298318 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce6e3295-9412-4b03-a51e-2311ec5922aa" containerName="mariadb-account-create"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.298507 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce6e3295-9412-4b03-a51e-2311ec5922aa" containerName="mariadb-account-create"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.299138 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.302759 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-extra-scripts"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.310073 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4kkzs-config-b2flw"]
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.375226 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-additional-scripts\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.375275 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssgr9\" (UniqueName: \"kubernetes.io/projected/66114d21-bc4b-4fa6-ac9b-47f854d50e92-kube-api-access-ssgr9\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.375490 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.375570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-log-ovn\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.375639 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run-ovn\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.375676 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-scripts\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.478069 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-log-ovn\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.478171 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run-ovn\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.478201 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-scripts\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.478263 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-additional-scripts\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.478291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ssgr9\" (UniqueName: \"kubernetes.io/projected/66114d21-bc4b-4fa6-ac9b-47f854d50e92-kube-api-access-ssgr9\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.478369 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.478676 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.478686 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-log-ovn\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.478762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run-ovn\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.479442 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-additional-scripts\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.480612 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-scripts\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.508562 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssgr9\" (UniqueName: \"kubernetes.io/projected/66114d21-bc4b-4fa6-ac9b-47f854d50e92-kube-api-access-ssgr9\") pod \"ovn-controller-4kkzs-config-b2flw\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " pod="openstack/ovn-controller-4kkzs-config-b2flw"
Oct 01 15:58:31 crc kubenswrapper[4949]: I1001 15:58:31.629045 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4kkzs-config-b2flw" Oct 01 15:58:36 crc kubenswrapper[4949]: I1001 15:58:36.059114 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/ovn-controller-4kkzs" podUID="230fbfcd-f990-42cf-88bb-9e4c4ae45a7d" containerName="ovn-controller" probeResult="failure" output=< Oct 01 15:58:36 crc kubenswrapper[4949]: ERROR - ovn-controller connection status is 'not connected', expecting 'connected' status Oct 01 15:58:36 crc kubenswrapper[4949]: > Oct 01 15:58:36 crc kubenswrapper[4949]: I1001 15:58:36.956608 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4kkzs-config-b2flw"] Oct 01 15:58:37 crc kubenswrapper[4949]: I1001 15:58:37.271263 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-76shp" event={"ID":"21738a3b-69a2-4955-b45d-fe1f31585951","Type":"ContainerStarted","Data":"879c9bb8ce7b67391592a9dd882ef7bb6b59f58559de77033d3eb297815fbf35"} Oct 01 15:58:37 crc kubenswrapper[4949]: I1001 15:58:37.275319 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4kkzs-config-b2flw" event={"ID":"66114d21-bc4b-4fa6-ac9b-47f854d50e92","Type":"ContainerStarted","Data":"ec2d0370e6be55fc2f394b7ef2a90e67e3323a05ffe1e940ef749a614cc67a4b"} Oct 01 15:58:37 crc kubenswrapper[4949]: I1001 15:58:37.275372 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4kkzs-config-b2flw" event={"ID":"66114d21-bc4b-4fa6-ac9b-47f854d50e92","Type":"ContainerStarted","Data":"137ca0fd1e9d3803e7ecfe5387f79bb6be463914907a3d0dee63b1be91265148"} Oct 01 15:58:37 crc kubenswrapper[4949]: I1001 15:58:37.284596 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-db-sync-76shp" podStartSLOduration=2.129563774 podStartE2EDuration="13.284580646s" podCreationTimestamp="2025-10-01 15:58:24 +0000 UTC" firstStartedPulling="2025-10-01 15:58:25.519943629 +0000 UTC 
m=+1004.825549820" lastFinishedPulling="2025-10-01 15:58:36.674960501 +0000 UTC m=+1015.980566692" observedRunningTime="2025-10-01 15:58:37.283183439 +0000 UTC m=+1016.588789630" watchObservedRunningTime="2025-10-01 15:58:37.284580646 +0000 UTC m=+1016.590186837" Oct 01 15:58:37 crc kubenswrapper[4949]: I1001 15:58:37.309279 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-controller-4kkzs-config-b2flw" podStartSLOduration=6.309251406 podStartE2EDuration="6.309251406s" podCreationTimestamp="2025-10-01 15:58:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:58:37.303566252 +0000 UTC m=+1016.609172453" watchObservedRunningTime="2025-10-01 15:58:37.309251406 +0000 UTC m=+1016.614857617" Oct 01 15:58:38 crc kubenswrapper[4949]: I1001 15:58:38.285402 4949 generic.go:334] "Generic (PLEG): container finished" podID="66114d21-bc4b-4fa6-ac9b-47f854d50e92" containerID="ec2d0370e6be55fc2f394b7ef2a90e67e3323a05ffe1e940ef749a614cc67a4b" exitCode=0 Oct 01 15:58:38 crc kubenswrapper[4949]: I1001 15:58:38.286996 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4kkzs-config-b2flw" event={"ID":"66114d21-bc4b-4fa6-ac9b-47f854d50e92","Type":"ContainerDied","Data":"ec2d0370e6be55fc2f394b7ef2a90e67e3323a05ffe1e940ef749a614cc67a4b"} Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.600100 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4kkzs-config-b2flw" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.714901 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-additional-scripts\") pod \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715068 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run\") pod \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715178 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssgr9\" (UniqueName: \"kubernetes.io/projected/66114d21-bc4b-4fa6-ac9b-47f854d50e92-kube-api-access-ssgr9\") pod \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715212 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run-ovn\") pod \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715243 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-log-ovn\") pod \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715250 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run" (OuterVolumeSpecName: "var-run") pod "66114d21-bc4b-4fa6-ac9b-47f854d50e92" (UID: "66114d21-bc4b-4fa6-ac9b-47f854d50e92"). InnerVolumeSpecName "var-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715343 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-scripts\") pod \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\" (UID: \"66114d21-bc4b-4fa6-ac9b-47f854d50e92\") " Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715329 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "66114d21-bc4b-4fa6-ac9b-47f854d50e92" (UID: "66114d21-bc4b-4fa6-ac9b-47f854d50e92"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715373 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "66114d21-bc4b-4fa6-ac9b-47f854d50e92" (UID: "66114d21-bc4b-4fa6-ac9b-47f854d50e92"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715796 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "66114d21-bc4b-4fa6-ac9b-47f854d50e92" (UID: "66114d21-bc4b-4fa6-ac9b-47f854d50e92"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.715987 4949 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.716021 4949 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.716035 4949 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.716057 4949 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/66114d21-bc4b-4fa6-ac9b-47f854d50e92-var-run\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.716107 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-scripts" (OuterVolumeSpecName: "scripts") pod "66114d21-bc4b-4fa6-ac9b-47f854d50e92" (UID: "66114d21-bc4b-4fa6-ac9b-47f854d50e92"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.723286 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66114d21-bc4b-4fa6-ac9b-47f854d50e92-kube-api-access-ssgr9" (OuterVolumeSpecName: "kube-api-access-ssgr9") pod "66114d21-bc4b-4fa6-ac9b-47f854d50e92" (UID: "66114d21-bc4b-4fa6-ac9b-47f854d50e92"). InnerVolumeSpecName "kube-api-access-ssgr9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.817672 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/66114d21-bc4b-4fa6-ac9b-47f854d50e92-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:39 crc kubenswrapper[4949]: I1001 15:58:39.817705 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ssgr9\" (UniqueName: \"kubernetes.io/projected/66114d21-bc4b-4fa6-ac9b-47f854d50e92-kube-api-access-ssgr9\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.065324 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4kkzs-config-b2flw"] Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.071613 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4kkzs-config-b2flw"] Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.171805 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-controller-4kkzs-config-sbg22"] Oct 01 15:58:40 crc kubenswrapper[4949]: E1001 15:58:40.172136 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="66114d21-bc4b-4fa6-ac9b-47f854d50e92" containerName="ovn-config" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.172151 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="66114d21-bc4b-4fa6-ac9b-47f854d50e92" containerName="ovn-config" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.172374 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="66114d21-bc4b-4fa6-ac9b-47f854d50e92" containerName="ovn-config" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.172905 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.185384 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4kkzs-config-sbg22"] Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.222724 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-additional-scripts\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.222857 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-scripts\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.222903 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run-ovn\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.222925 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dskj6\" (UniqueName: \"kubernetes.io/projected/4c72ce66-451c-4da1-a580-c19f7a2f242e-kube-api-access-dskj6\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.222953 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-log-ovn\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.223043 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.301949 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="137ca0fd1e9d3803e7ecfe5387f79bb6be463914907a3d0dee63b1be91265148" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.302032 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-controller-4kkzs-config-b2flw" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.324374 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-scripts\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.324453 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run-ovn\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.324486 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dskj6\" (UniqueName: \"kubernetes.io/projected/4c72ce66-451c-4da1-a580-c19f7a2f242e-kube-api-access-dskj6\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.324517 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-log-ovn\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.324572 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " 
pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.324605 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-additional-scripts\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.324849 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-log-ovn\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.324876 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run-ovn\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.324882 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.325437 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-additional-scripts\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " 
pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.328302 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-scripts\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.354948 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dskj6\" (UniqueName: \"kubernetes.io/projected/4c72ce66-451c-4da1-a580-c19f7a2f242e-kube-api-access-dskj6\") pod \"ovn-controller-4kkzs-config-sbg22\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.489030 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.742361 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-controller-4kkzs-config-sbg22"] Oct 01 15:58:40 crc kubenswrapper[4949]: W1001 15:58:40.813286 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c72ce66_451c_4da1_a580_c19f7a2f242e.slice/crio-224d938114a37142e816641c0db2d375ff62596cb545c845c0834d837c4fd666 WatchSource:0}: Error finding container 224d938114a37142e816641c0db2d375ff62596cb545c845c0834d837c4fd666: Status 404 returned error can't find the container with id 224d938114a37142e816641c0db2d375ff62596cb545c845c0834d837c4fd666 Oct 01 15:58:40 crc kubenswrapper[4949]: I1001 15:58:40.995333 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.108318 4949 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openstack/ovn-controller-4kkzs" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.278320 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.291333 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-create-rrf4c"] Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.292641 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rrf4c" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.300726 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rrf4c"] Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.310923 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4kkzs-config-sbg22" event={"ID":"4c72ce66-451c-4da1-a580-c19f7a2f242e","Type":"ContainerStarted","Data":"224d938114a37142e816641c0db2d375ff62596cb545c845c0834d837c4fd666"} Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.341696 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6zgx\" (UniqueName: \"kubernetes.io/projected/b92402c1-7aa8-4a18-8603-a88c7d5b3735-kube-api-access-f6zgx\") pod \"cinder-db-create-rrf4c\" (UID: \"b92402c1-7aa8-4a18-8603-a88c7d5b3735\") " pod="openstack/cinder-db-create-rrf4c" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.397433 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-create-nrvpq"] Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.399356 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nrvpq" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.411760 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nrvpq"] Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.445690 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2std\" (UniqueName: \"kubernetes.io/projected/1c5b1392-74a7-44b0-8475-7795e34531ca-kube-api-access-c2std\") pod \"barbican-db-create-nrvpq\" (UID: \"1c5b1392-74a7-44b0-8475-7795e34531ca\") " pod="openstack/barbican-db-create-nrvpq" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.445799 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f6zgx\" (UniqueName: \"kubernetes.io/projected/b92402c1-7aa8-4a18-8603-a88c7d5b3735-kube-api-access-f6zgx\") pod \"cinder-db-create-rrf4c\" (UID: \"b92402c1-7aa8-4a18-8603-a88c7d5b3735\") " pod="openstack/cinder-db-create-rrf4c" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.468833 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6zgx\" (UniqueName: \"kubernetes.io/projected/b92402c1-7aa8-4a18-8603-a88c7d5b3735-kube-api-access-f6zgx\") pod \"cinder-db-create-rrf4c\" (UID: \"b92402c1-7aa8-4a18-8603-a88c7d5b3735\") " pod="openstack/cinder-db-create-rrf4c" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.547294 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c2std\" (UniqueName: \"kubernetes.io/projected/1c5b1392-74a7-44b0-8475-7795e34531ca-kube-api-access-c2std\") pod \"barbican-db-create-nrvpq\" (UID: \"1c5b1392-74a7-44b0-8475-7795e34531ca\") " pod="openstack/barbican-db-create-nrvpq" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.598298 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-create-dv4j7"] Oct 01 15:58:41 crc 
kubenswrapper[4949]: I1001 15:58:41.599490 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dv4j7" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.601469 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c2std\" (UniqueName: \"kubernetes.io/projected/1c5b1392-74a7-44b0-8475-7795e34531ca-kube-api-access-c2std\") pod \"barbican-db-create-nrvpq\" (UID: \"1c5b1392-74a7-44b0-8475-7795e34531ca\") " pod="openstack/barbican-db-create-nrvpq" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.613540 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rrf4c" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.618812 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="66114d21-bc4b-4fa6-ac9b-47f854d50e92" path="/var/lib/kubelet/pods/66114d21-bc4b-4fa6-ac9b-47f854d50e92/volumes" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.619635 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dv4j7"] Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.652925 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tbrm\" (UniqueName: \"kubernetes.io/projected/8f41fa71-2325-4f08-9fe0-1268868683cf-kube-api-access-8tbrm\") pod \"neutron-db-create-dv4j7\" (UID: \"8f41fa71-2325-4f08-9fe0-1268868683cf\") " pod="openstack/neutron-db-create-dv4j7" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.734536 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nrvpq" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.755050 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8tbrm\" (UniqueName: \"kubernetes.io/projected/8f41fa71-2325-4f08-9fe0-1268868683cf-kube-api-access-8tbrm\") pod \"neutron-db-create-dv4j7\" (UID: \"8f41fa71-2325-4f08-9fe0-1268868683cf\") " pod="openstack/neutron-db-create-dv4j7" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.762215 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-db-sync-t2pgk"] Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.763881 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.767990 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kckdb" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.768441 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.772466 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.774385 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.782270 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t2pgk"] Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.782876 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8tbrm\" (UniqueName: \"kubernetes.io/projected/8f41fa71-2325-4f08-9fe0-1268868683cf-kube-api-access-8tbrm\") pod \"neutron-db-create-dv4j7\" (UID: \"8f41fa71-2325-4f08-9fe0-1268868683cf\") " pod="openstack/neutron-db-create-dv4j7" Oct 01 
15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.856817 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-combined-ca-bundle\") pod \"keystone-db-sync-t2pgk\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.856877 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27mdz\" (UniqueName: \"kubernetes.io/projected/4b1a3c6f-40a7-4816-8404-4910abf14478-kube-api-access-27mdz\") pod \"keystone-db-sync-t2pgk\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.856939 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-config-data\") pod \"keystone-db-sync-t2pgk\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.923285 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-create-rrf4c"] Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.957892 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-27mdz\" (UniqueName: \"kubernetes.io/projected/4b1a3c6f-40a7-4816-8404-4910abf14478-kube-api-access-27mdz\") pod \"keystone-db-sync-t2pgk\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.957984 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-config-data\") pod 
\"keystone-db-sync-t2pgk\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.958057 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-combined-ca-bundle\") pod \"keystone-db-sync-t2pgk\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.963675 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-config-data\") pod \"keystone-db-sync-t2pgk\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.963831 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-combined-ca-bundle\") pod \"keystone-db-sync-t2pgk\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:41 crc kubenswrapper[4949]: I1001 15:58:41.976753 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-27mdz\" (UniqueName: \"kubernetes.io/projected/4b1a3c6f-40a7-4816-8404-4910abf14478-kube-api-access-27mdz\") pod \"keystone-db-sync-t2pgk\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.004771 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dv4j7" Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.148320 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.268917 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-create-nrvpq"] Oct 01 15:58:42 crc kubenswrapper[4949]: W1001 15:58:42.280844 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c5b1392_74a7_44b0_8475_7795e34531ca.slice/crio-509b631ec51b773d5e6463098821a4e1c6e14c29a805cda3f9e9841b3aa1b835 WatchSource:0}: Error finding container 509b631ec51b773d5e6463098821a4e1c6e14c29a805cda3f9e9841b3aa1b835: Status 404 returned error can't find the container with id 509b631ec51b773d5e6463098821a4e1c6e14c29a805cda3f9e9841b3aa1b835 Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.328476 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nrvpq" event={"ID":"1c5b1392-74a7-44b0-8475-7795e34531ca","Type":"ContainerStarted","Data":"509b631ec51b773d5e6463098821a4e1c6e14c29a805cda3f9e9841b3aa1b835"} Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.330167 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rrf4c" event={"ID":"b92402c1-7aa8-4a18-8603-a88c7d5b3735","Type":"ContainerStarted","Data":"0348f03511f01ce66f92b3f4a3b0cf7ef0a302c351d7500f9a8a6ce9e9cc82ce"} Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.330201 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rrf4c" event={"ID":"b92402c1-7aa8-4a18-8603-a88c7d5b3735","Type":"ContainerStarted","Data":"a1a37a19c90ba490f99702a4b975c9f43eeeb9fd1f507cfa190cc76e43528388"} Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.339567 4949 generic.go:334] "Generic (PLEG): container finished" podID="4c72ce66-451c-4da1-a580-c19f7a2f242e" containerID="75d734ecd766ed27af83402c8abe1a9e364172aed99922df783b9a5c8c7227a8" exitCode=0 Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 
15:58:42.339616 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4kkzs-config-sbg22" event={"ID":"4c72ce66-451c-4da1-a580-c19f7a2f242e","Type":"ContainerDied","Data":"75d734ecd766ed27af83402c8abe1a9e364172aed99922df783b9a5c8c7227a8"} Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.353739 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-create-rrf4c" podStartSLOduration=1.353721643 podStartE2EDuration="1.353721643s" podCreationTimestamp="2025-10-01 15:58:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:58:42.343897736 +0000 UTC m=+1021.649503927" watchObservedRunningTime="2025-10-01 15:58:42.353721643 +0000 UTC m=+1021.659327824" Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.485416 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-create-dv4j7"] Oct 01 15:58:42 crc kubenswrapper[4949]: W1001 15:58:42.487316 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8f41fa71_2325_4f08_9fe0_1268868683cf.slice/crio-740ac7ff6e0a8769dc7d92032ada0401f714b7eff378e27bfaf5031014e67452 WatchSource:0}: Error finding container 740ac7ff6e0a8769dc7d92032ada0401f714b7eff378e27bfaf5031014e67452: Status 404 returned error can't find the container with id 740ac7ff6e0a8769dc7d92032ada0401f714b7eff378e27bfaf5031014e67452 Oct 01 15:58:42 crc kubenswrapper[4949]: I1001 15:58:42.620864 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-db-sync-t2pgk"] Oct 01 15:58:42 crc kubenswrapper[4949]: W1001 15:58:42.621154 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4b1a3c6f_40a7_4816_8404_4910abf14478.slice/crio-2df95f5c8c4acb09fa5eed50959da890372ee8a2dd6ecd654b7eac2491fe9de1 WatchSource:0}: Error finding container 2df95f5c8c4acb09fa5eed50959da890372ee8a2dd6ecd654b7eac2491fe9de1: Status 404 returned error can't find the container with id 2df95f5c8c4acb09fa5eed50959da890372ee8a2dd6ecd654b7eac2491fe9de1 Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.349532 4949 generic.go:334] "Generic (PLEG): container finished" podID="8f41fa71-2325-4f08-9fe0-1268868683cf" containerID="956aa20118237b3229f3f50e0fe2cb6f61d8253275f1e7b716e45a8fbbffa576" exitCode=0 Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.349589 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dv4j7" event={"ID":"8f41fa71-2325-4f08-9fe0-1268868683cf","Type":"ContainerDied","Data":"956aa20118237b3229f3f50e0fe2cb6f61d8253275f1e7b716e45a8fbbffa576"} Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.349902 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dv4j7" event={"ID":"8f41fa71-2325-4f08-9fe0-1268868683cf","Type":"ContainerStarted","Data":"740ac7ff6e0a8769dc7d92032ada0401f714b7eff378e27bfaf5031014e67452"} Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.351515 4949 generic.go:334] "Generic (PLEG): container finished" podID="1c5b1392-74a7-44b0-8475-7795e34531ca" containerID="939f8ab3f1f8d4d87c43c17b33dbea9d54926d4dbd45b55434a8a27502ae2a03" exitCode=0 Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.351574 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nrvpq" event={"ID":"1c5b1392-74a7-44b0-8475-7795e34531ca","Type":"ContainerDied","Data":"939f8ab3f1f8d4d87c43c17b33dbea9d54926d4dbd45b55434a8a27502ae2a03"} Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.353454 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/keystone-db-sync-t2pgk" event={"ID":"4b1a3c6f-40a7-4816-8404-4910abf14478","Type":"ContainerStarted","Data":"2df95f5c8c4acb09fa5eed50959da890372ee8a2dd6ecd654b7eac2491fe9de1"} Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.355380 4949 generic.go:334] "Generic (PLEG): container finished" podID="b92402c1-7aa8-4a18-8603-a88c7d5b3735" containerID="0348f03511f01ce66f92b3f4a3b0cf7ef0a302c351d7500f9a8a6ce9e9cc82ce" exitCode=0 Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.355460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rrf4c" event={"ID":"b92402c1-7aa8-4a18-8603-a88c7d5b3735","Type":"ContainerDied","Data":"0348f03511f01ce66f92b3f4a3b0cf7ef0a302c351d7500f9a8a6ce9e9cc82ce"} Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.694857 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.784257 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-scripts\") pod \"4c72ce66-451c-4da1-a580-c19f7a2f242e\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.784350 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run-ovn\") pod \"4c72ce66-451c-4da1-a580-c19f7a2f242e\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.784379 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dskj6\" (UniqueName: \"kubernetes.io/projected/4c72ce66-451c-4da1-a580-c19f7a2f242e-kube-api-access-dskj6\") pod \"4c72ce66-451c-4da1-a580-c19f7a2f242e\" (UID: 
\"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.784420 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run\") pod \"4c72ce66-451c-4da1-a580-c19f7a2f242e\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.784449 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-additional-scripts\") pod \"4c72ce66-451c-4da1-a580-c19f7a2f242e\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.784465 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run-ovn" (OuterVolumeSpecName: "var-run-ovn") pod "4c72ce66-451c-4da1-a580-c19f7a2f242e" (UID: "4c72ce66-451c-4da1-a580-c19f7a2f242e"). InnerVolumeSpecName "var-run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.784545 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-log-ovn\") pod \"4c72ce66-451c-4da1-a580-c19f7a2f242e\" (UID: \"4c72ce66-451c-4da1-a580-c19f7a2f242e\") " Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.784542 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run" (OuterVolumeSpecName: "var-run") pod "4c72ce66-451c-4da1-a580-c19f7a2f242e" (UID: "4c72ce66-451c-4da1-a580-c19f7a2f242e"). InnerVolumeSpecName "var-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.784693 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-log-ovn" (OuterVolumeSpecName: "var-log-ovn") pod "4c72ce66-451c-4da1-a580-c19f7a2f242e" (UID: "4c72ce66-451c-4da1-a580-c19f7a2f242e"). InnerVolumeSpecName "var-log-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.785022 4949 reconciler_common.go:293] "Volume detached for volume \"var-run\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.785035 4949 reconciler_common.go:293] "Volume detached for volume \"var-log-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-log-ovn\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.785043 4949 reconciler_common.go:293] "Volume detached for volume \"var-run-ovn\" (UniqueName: \"kubernetes.io/host-path/4c72ce66-451c-4da1-a580-c19f7a2f242e-var-run-ovn\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.785284 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-additional-scripts" (OuterVolumeSpecName: "additional-scripts") pod "4c72ce66-451c-4da1-a580-c19f7a2f242e" (UID: "4c72ce66-451c-4da1-a580-c19f7a2f242e"). InnerVolumeSpecName "additional-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.785582 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-scripts" (OuterVolumeSpecName: "scripts") pod "4c72ce66-451c-4da1-a580-c19f7a2f242e" (UID: "4c72ce66-451c-4da1-a580-c19f7a2f242e"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.789550 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c72ce66-451c-4da1-a580-c19f7a2f242e-kube-api-access-dskj6" (OuterVolumeSpecName: "kube-api-access-dskj6") pod "4c72ce66-451c-4da1-a580-c19f7a2f242e" (UID: "4c72ce66-451c-4da1-a580-c19f7a2f242e"). InnerVolumeSpecName "kube-api-access-dskj6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.887621 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.887667 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dskj6\" (UniqueName: \"kubernetes.io/projected/4c72ce66-451c-4da1-a580-c19f7a2f242e-kube-api-access-dskj6\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:43 crc kubenswrapper[4949]: I1001 15:58:43.887685 4949 reconciler_common.go:293] "Volume detached for volume \"additional-scripts\" (UniqueName: \"kubernetes.io/configmap/4c72ce66-451c-4da1-a580-c19f7a2f242e-additional-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:44 crc kubenswrapper[4949]: I1001 15:58:44.387428 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-controller-4kkzs-config-sbg22" 
event={"ID":"4c72ce66-451c-4da1-a580-c19f7a2f242e","Type":"ContainerDied","Data":"224d938114a37142e816641c0db2d375ff62596cb545c845c0834d837c4fd666"} Oct 01 15:58:44 crc kubenswrapper[4949]: I1001 15:58:44.387497 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="224d938114a37142e816641c0db2d375ff62596cb545c845c0834d837c4fd666" Oct 01 15:58:44 crc kubenswrapper[4949]: I1001 15:58:44.387559 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-controller-4kkzs-config-sbg22" Oct 01 15:58:44 crc kubenswrapper[4949]: I1001 15:58:44.836273 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ovn-controller-4kkzs-config-sbg22"] Oct 01 15:58:44 crc kubenswrapper[4949]: I1001 15:58:44.850607 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ovn-controller-4kkzs-config-sbg22"] Oct 01 15:58:44 crc kubenswrapper[4949]: I1001 15:58:44.962573 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rrf4c" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.010596 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-f6zgx\" (UniqueName: \"kubernetes.io/projected/b92402c1-7aa8-4a18-8603-a88c7d5b3735-kube-api-access-f6zgx\") pod \"b92402c1-7aa8-4a18-8603-a88c7d5b3735\" (UID: \"b92402c1-7aa8-4a18-8603-a88c7d5b3735\") " Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.024064 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b92402c1-7aa8-4a18-8603-a88c7d5b3735-kube-api-access-f6zgx" (OuterVolumeSpecName: "kube-api-access-f6zgx") pod "b92402c1-7aa8-4a18-8603-a88c7d5b3735" (UID: "b92402c1-7aa8-4a18-8603-a88c7d5b3735"). InnerVolumeSpecName "kube-api-access-f6zgx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.078211 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-create-dv4j7" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.090823 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-create-nrvpq" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.114082 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tbrm\" (UniqueName: \"kubernetes.io/projected/8f41fa71-2325-4f08-9fe0-1268868683cf-kube-api-access-8tbrm\") pod \"8f41fa71-2325-4f08-9fe0-1268868683cf\" (UID: \"8f41fa71-2325-4f08-9fe0-1268868683cf\") " Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.114320 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2std\" (UniqueName: \"kubernetes.io/projected/1c5b1392-74a7-44b0-8475-7795e34531ca-kube-api-access-c2std\") pod \"1c5b1392-74a7-44b0-8475-7795e34531ca\" (UID: \"1c5b1392-74a7-44b0-8475-7795e34531ca\") " Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.114883 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-f6zgx\" (UniqueName: \"kubernetes.io/projected/b92402c1-7aa8-4a18-8603-a88c7d5b3735-kube-api-access-f6zgx\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.120725 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c5b1392-74a7-44b0-8475-7795e34531ca-kube-api-access-c2std" (OuterVolumeSpecName: "kube-api-access-c2std") pod "1c5b1392-74a7-44b0-8475-7795e34531ca" (UID: "1c5b1392-74a7-44b0-8475-7795e34531ca"). InnerVolumeSpecName "kube-api-access-c2std". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.133448 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f41fa71-2325-4f08-9fe0-1268868683cf-kube-api-access-8tbrm" (OuterVolumeSpecName: "kube-api-access-8tbrm") pod "8f41fa71-2325-4f08-9fe0-1268868683cf" (UID: "8f41fa71-2325-4f08-9fe0-1268868683cf"). InnerVolumeSpecName "kube-api-access-8tbrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.217475 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c2std\" (UniqueName: \"kubernetes.io/projected/1c5b1392-74a7-44b0-8475-7795e34531ca-kube-api-access-c2std\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.217542 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tbrm\" (UniqueName: \"kubernetes.io/projected/8f41fa71-2325-4f08-9fe0-1268868683cf-kube-api-access-8tbrm\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.396413 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-create-nrvpq" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.396571 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-create-nrvpq" event={"ID":"1c5b1392-74a7-44b0-8475-7795e34531ca","Type":"ContainerDied","Data":"509b631ec51b773d5e6463098821a4e1c6e14c29a805cda3f9e9841b3aa1b835"} Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.396616 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="509b631ec51b773d5e6463098821a4e1c6e14c29a805cda3f9e9841b3aa1b835" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.399097 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-create-rrf4c" event={"ID":"b92402c1-7aa8-4a18-8603-a88c7d5b3735","Type":"ContainerDied","Data":"a1a37a19c90ba490f99702a4b975c9f43eeeb9fd1f507cfa190cc76e43528388"} Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.399147 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a1a37a19c90ba490f99702a4b975c9f43eeeb9fd1f507cfa190cc76e43528388" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.399211 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-create-rrf4c" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.411100 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-create-dv4j7" event={"ID":"8f41fa71-2325-4f08-9fe0-1268868683cf","Type":"ContainerDied","Data":"740ac7ff6e0a8769dc7d92032ada0401f714b7eff378e27bfaf5031014e67452"} Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.411161 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="740ac7ff6e0a8769dc7d92032ada0401f714b7eff378e27bfaf5031014e67452" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.411184 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-create-dv4j7" Oct 01 15:58:45 crc kubenswrapper[4949]: I1001 15:58:45.616633 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c72ce66-451c-4da1-a580-c19f7a2f242e" path="/var/lib/kubelet/pods/4c72ce66-451c-4da1-a580-c19f7a2f242e/volumes" Oct 01 15:58:49 crc kubenswrapper[4949]: I1001 15:58:49.446847 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t2pgk" event={"ID":"4b1a3c6f-40a7-4816-8404-4910abf14478","Type":"ContainerStarted","Data":"19c91af858546721d0015fe3ff605169bd9bf4c54c48ff9ba386466a8a79de49"} Oct 01 15:58:49 crc kubenswrapper[4949]: I1001 15:58:49.448762 4949 generic.go:334] "Generic (PLEG): container finished" podID="21738a3b-69a2-4955-b45d-fe1f31585951" containerID="879c9bb8ce7b67391592a9dd882ef7bb6b59f58559de77033d3eb297815fbf35" exitCode=0 Oct 01 15:58:49 crc kubenswrapper[4949]: I1001 15:58:49.448803 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-76shp" event={"ID":"21738a3b-69a2-4955-b45d-fe1f31585951","Type":"ContainerDied","Data":"879c9bb8ce7b67391592a9dd882ef7bb6b59f58559de77033d3eb297815fbf35"} Oct 01 15:58:49 crc kubenswrapper[4949]: I1001 15:58:49.486618 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-db-sync-t2pgk" podStartSLOduration=2.338834915 podStartE2EDuration="8.486596409s" podCreationTimestamp="2025-10-01 15:58:41 +0000 UTC" firstStartedPulling="2025-10-01 15:58:42.622854313 +0000 UTC m=+1021.928460504" lastFinishedPulling="2025-10-01 15:58:48.770615787 +0000 UTC m=+1028.076221998" observedRunningTime="2025-10-01 15:58:49.463955625 +0000 UTC m=+1028.769561836" watchObservedRunningTime="2025-10-01 15:58:49.486596409 +0000 UTC m=+1028.792202600" Oct 01 15:58:50 crc kubenswrapper[4949]: I1001 15:58:50.824008 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-db-sync-76shp" Oct 01 15:58:50 crc kubenswrapper[4949]: I1001 15:58:50.910880 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r9kkz\" (UniqueName: \"kubernetes.io/projected/21738a3b-69a2-4955-b45d-fe1f31585951-kube-api-access-r9kkz\") pod \"21738a3b-69a2-4955-b45d-fe1f31585951\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " Oct 01 15:58:50 crc kubenswrapper[4949]: I1001 15:58:50.911035 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-config-data\") pod \"21738a3b-69a2-4955-b45d-fe1f31585951\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " Oct 01 15:58:50 crc kubenswrapper[4949]: I1001 15:58:50.911251 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-combined-ca-bundle\") pod \"21738a3b-69a2-4955-b45d-fe1f31585951\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " Oct 01 15:58:50 crc kubenswrapper[4949]: I1001 15:58:50.911344 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-db-sync-config-data\") pod \"21738a3b-69a2-4955-b45d-fe1f31585951\" (UID: \"21738a3b-69a2-4955-b45d-fe1f31585951\") " Oct 01 15:58:50 crc kubenswrapper[4949]: I1001 15:58:50.916440 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "21738a3b-69a2-4955-b45d-fe1f31585951" (UID: "21738a3b-69a2-4955-b45d-fe1f31585951"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:58:50 crc kubenswrapper[4949]: I1001 15:58:50.916689 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21738a3b-69a2-4955-b45d-fe1f31585951-kube-api-access-r9kkz" (OuterVolumeSpecName: "kube-api-access-r9kkz") pod "21738a3b-69a2-4955-b45d-fe1f31585951" (UID: "21738a3b-69a2-4955-b45d-fe1f31585951"). InnerVolumeSpecName "kube-api-access-r9kkz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:50 crc kubenswrapper[4949]: I1001 15:58:50.942682 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21738a3b-69a2-4955-b45d-fe1f31585951" (UID: "21738a3b-69a2-4955-b45d-fe1f31585951"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:58:50 crc kubenswrapper[4949]: I1001 15:58:50.958889 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-config-data" (OuterVolumeSpecName: "config-data") pod "21738a3b-69a2-4955-b45d-fe1f31585951" (UID: "21738a3b-69a2-4955-b45d-fe1f31585951"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.012511 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.012542 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.012552 4949 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/21738a3b-69a2-4955-b45d-fe1f31585951-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.012562 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r9kkz\" (UniqueName: \"kubernetes.io/projected/21738a3b-69a2-4955-b45d-fe1f31585951-kube-api-access-r9kkz\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.431073 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-5d51-account-create-ctjvl"] Oct 01 15:58:51 crc kubenswrapper[4949]: E1001 15:58:51.434324 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c5b1392-74a7-44b0-8475-7795e34531ca" containerName="mariadb-database-create" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.434371 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c5b1392-74a7-44b0-8475-7795e34531ca" containerName="mariadb-database-create" Oct 01 15:58:51 crc kubenswrapper[4949]: E1001 15:58:51.434402 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c72ce66-451c-4da1-a580-c19f7a2f242e" containerName="ovn-config" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.434416 4949 
state_mem.go:107] "Deleted CPUSet assignment" podUID="4c72ce66-451c-4da1-a580-c19f7a2f242e" containerName="ovn-config" Oct 01 15:58:51 crc kubenswrapper[4949]: E1001 15:58:51.434428 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b92402c1-7aa8-4a18-8603-a88c7d5b3735" containerName="mariadb-database-create" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.434439 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b92402c1-7aa8-4a18-8603-a88c7d5b3735" containerName="mariadb-database-create" Oct 01 15:58:51 crc kubenswrapper[4949]: E1001 15:58:51.434467 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8f41fa71-2325-4f08-9fe0-1268868683cf" containerName="mariadb-database-create" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.434477 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8f41fa71-2325-4f08-9fe0-1268868683cf" containerName="mariadb-database-create" Oct 01 15:58:51 crc kubenswrapper[4949]: E1001 15:58:51.434495 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21738a3b-69a2-4955-b45d-fe1f31585951" containerName="glance-db-sync" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.434506 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="21738a3b-69a2-4955-b45d-fe1f31585951" containerName="glance-db-sync" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.434808 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b92402c1-7aa8-4a18-8603-a88c7d5b3735" containerName="mariadb-database-create" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.434849 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="21738a3b-69a2-4955-b45d-fe1f31585951" containerName="glance-db-sync" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.434867 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4c72ce66-451c-4da1-a580-c19f7a2f242e" containerName="ovn-config" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 
15:58:51.434886 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c5b1392-74a7-44b0-8475-7795e34531ca" containerName="mariadb-database-create" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.434910 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8f41fa71-2325-4f08-9fe0-1268868683cf" containerName="mariadb-database-create" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.435730 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5d51-account-create-ctjvl" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.439397 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-db-secret" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.454436 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5d51-account-create-ctjvl"] Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.469653 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-db-sync-76shp" event={"ID":"21738a3b-69a2-4955-b45d-fe1f31585951","Type":"ContainerDied","Data":"d24e4bc377dac47c0ef189c5a8ac1d4ac871a365d8ffbd02f21d5f9a58bd533b"} Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.469687 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d24e4bc377dac47c0ef189c5a8ac1d4ac871a365d8ffbd02f21d5f9a58bd533b" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.469740 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-db-sync-76shp" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.500010 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-fd00-account-create-rg9pt"] Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.501041 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fd00-account-create-rg9pt" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.504000 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-db-secret" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.507816 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fd00-account-create-rg9pt"] Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.524224 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf86r\" (UniqueName: \"kubernetes.io/projected/9f548f94-239b-4712-bc59-5dfa63311d7f-kube-api-access-tf86r\") pod \"cinder-5d51-account-create-ctjvl\" (UID: \"9f548f94-239b-4712-bc59-5dfa63311d7f\") " pod="openstack/cinder-5d51-account-create-ctjvl" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.627303 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf86r\" (UniqueName: \"kubernetes.io/projected/9f548f94-239b-4712-bc59-5dfa63311d7f-kube-api-access-tf86r\") pod \"cinder-5d51-account-create-ctjvl\" (UID: \"9f548f94-239b-4712-bc59-5dfa63311d7f\") " pod="openstack/cinder-5d51-account-create-ctjvl" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.627386 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twmmh\" (UniqueName: \"kubernetes.io/projected/db479bb0-2fc9-4a74-aeaa-8bb8446f2657-kube-api-access-twmmh\") pod \"barbican-fd00-account-create-rg9pt\" (UID: \"db479bb0-2fc9-4a74-aeaa-8bb8446f2657\") " pod="openstack/barbican-fd00-account-create-rg9pt" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.648461 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf86r\" (UniqueName: \"kubernetes.io/projected/9f548f94-239b-4712-bc59-5dfa63311d7f-kube-api-access-tf86r\") pod \"cinder-5d51-account-create-ctjvl\" (UID: 
\"9f548f94-239b-4712-bc59-5dfa63311d7f\") " pod="openstack/cinder-5d51-account-create-ctjvl" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.728018 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-6041-account-create-67kfb"] Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.729018 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6041-account-create-67kfb" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.729008 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twmmh\" (UniqueName: \"kubernetes.io/projected/db479bb0-2fc9-4a74-aeaa-8bb8446f2657-kube-api-access-twmmh\") pod \"barbican-fd00-account-create-rg9pt\" (UID: \"db479bb0-2fc9-4a74-aeaa-8bb8446f2657\") " pod="openstack/barbican-fd00-account-create-rg9pt" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.731581 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-db-secret" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.750784 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twmmh\" (UniqueName: \"kubernetes.io/projected/db479bb0-2fc9-4a74-aeaa-8bb8446f2657-kube-api-access-twmmh\") pod \"barbican-fd00-account-create-rg9pt\" (UID: \"db479bb0-2fc9-4a74-aeaa-8bb8446f2657\") " pod="openstack/barbican-fd00-account-create-rg9pt" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.760174 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6041-account-create-67kfb"] Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.764667 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5d51-account-create-ctjvl" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.829479 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fd00-account-create-rg9pt" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.831564 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8p26\" (UniqueName: \"kubernetes.io/projected/24061dd1-dc8e-4fb2-b372-25983f927a74-kube-api-access-j8p26\") pod \"neutron-6041-account-create-67kfb\" (UID: \"24061dd1-dc8e-4fb2-b372-25983f927a74\") " pod="openstack/neutron-6041-account-create-67kfb" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.937474 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8p26\" (UniqueName: \"kubernetes.io/projected/24061dd1-dc8e-4fb2-b372-25983f927a74-kube-api-access-j8p26\") pod \"neutron-6041-account-create-67kfb\" (UID: \"24061dd1-dc8e-4fb2-b372-25983f927a74\") " pod="openstack/neutron-6041-account-create-67kfb" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.946964 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-trvwm"] Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.957088 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:51 crc kubenswrapper[4949]: I1001 15:58:51.986219 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-trvwm"] Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.043459 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.043766 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.043806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-config\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.043832 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.043870 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-xw867\" (UniqueName: \"kubernetes.io/projected/845971ca-cc3f-4227-adcf-8049ce7fd830-kube-api-access-xw867\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.049061 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8p26\" (UniqueName: \"kubernetes.io/projected/24061dd1-dc8e-4fb2-b372-25983f927a74-kube-api-access-j8p26\") pod \"neutron-6041-account-create-67kfb\" (UID: \"24061dd1-dc8e-4fb2-b372-25983f927a74\") " pod="openstack/neutron-6041-account-create-67kfb" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.145240 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-config\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.145294 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.145334 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xw867\" (UniqueName: \"kubernetes.io/projected/845971ca-cc3f-4227-adcf-8049ce7fd830-kube-api-access-xw867\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.145381 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.145417 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.146297 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-dns-svc\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.147214 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-config\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.147237 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-sb\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.149068 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-nb\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" 
(UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.189733 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6041-account-create-67kfb" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.198411 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xw867\" (UniqueName: \"kubernetes.io/projected/845971ca-cc3f-4227-adcf-8049ce7fd830-kube-api-access-xw867\") pod \"dnsmasq-dns-54f9b7b8d9-trvwm\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.391202 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.546482 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-5d51-account-create-ctjvl"] Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.603815 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-fd00-account-create-rg9pt"] Oct 01 15:58:52 crc kubenswrapper[4949]: W1001 15:58:52.607574 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb479bb0_2fc9_4a74_aeaa_8bb8446f2657.slice/crio-fe1e91c607fb5a18423fd9c6a4aff4078ca8bd79d40ef95a40d52f930749d539 WatchSource:0}: Error finding container fe1e91c607fb5a18423fd9c6a4aff4078ca8bd79d40ef95a40d52f930749d539: Status 404 returned error can't find the container with id fe1e91c607fb5a18423fd9c6a4aff4078ca8bd79d40ef95a40d52f930749d539 Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.711167 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-6041-account-create-67kfb"] Oct 01 15:58:52 crc kubenswrapper[4949]: W1001 15:58:52.716599 4949 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod24061dd1_dc8e_4fb2_b372_25983f927a74.slice/crio-9d370d38c403950916ca8de1d0d724c998517155ebf0c18204902db22788fa1a WatchSource:0}: Error finding container 9d370d38c403950916ca8de1d0d724c998517155ebf0c18204902db22788fa1a: Status 404 returned error can't find the container with id 9d370d38c403950916ca8de1d0d724c998517155ebf0c18204902db22788fa1a Oct 01 15:58:52 crc kubenswrapper[4949]: I1001 15:58:52.881202 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-trvwm"] Oct 01 15:58:52 crc kubenswrapper[4949]: W1001 15:58:52.961831 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod845971ca_cc3f_4227_adcf_8049ce7fd830.slice/crio-4dd9522e4abb653b1e503c801892321b9b6d356a4df0e5cc2902cca91fde58db WatchSource:0}: Error finding container 4dd9522e4abb653b1e503c801892321b9b6d356a4df0e5cc2902cca91fde58db: Status 404 returned error can't find the container with id 4dd9522e4abb653b1e503c801892321b9b6d356a4df0e5cc2902cca91fde58db Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.492726 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f548f94-239b-4712-bc59-5dfa63311d7f" containerID="057533705fc546a66181093688ed7cf1ed1817a45bde40d0b2b8ff79fd047084" exitCode=0 Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.492806 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5d51-account-create-ctjvl" event={"ID":"9f548f94-239b-4712-bc59-5dfa63311d7f","Type":"ContainerDied","Data":"057533705fc546a66181093688ed7cf1ed1817a45bde40d0b2b8ff79fd047084"} Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.492839 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5d51-account-create-ctjvl" 
event={"ID":"9f548f94-239b-4712-bc59-5dfa63311d7f","Type":"ContainerStarted","Data":"de9f67740a0aff35374a1a796c8993f8291ab8d6d68986d428cc58b1dfdaaa82"} Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.494988 4949 generic.go:334] "Generic (PLEG): container finished" podID="845971ca-cc3f-4227-adcf-8049ce7fd830" containerID="133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3" exitCode=0 Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.495057 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" event={"ID":"845971ca-cc3f-4227-adcf-8049ce7fd830","Type":"ContainerDied","Data":"133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3"} Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.495086 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" event={"ID":"845971ca-cc3f-4227-adcf-8049ce7fd830","Type":"ContainerStarted","Data":"4dd9522e4abb653b1e503c801892321b9b6d356a4df0e5cc2902cca91fde58db"} Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.497229 4949 generic.go:334] "Generic (PLEG): container finished" podID="24061dd1-dc8e-4fb2-b372-25983f927a74" containerID="e4e742ff462b7aceeeee88ae2f34bb8fc52715249faea4416550d16da5a1c878" exitCode=0 Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.497302 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6041-account-create-67kfb" event={"ID":"24061dd1-dc8e-4fb2-b372-25983f927a74","Type":"ContainerDied","Data":"e4e742ff462b7aceeeee88ae2f34bb8fc52715249faea4416550d16da5a1c878"} Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.497334 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6041-account-create-67kfb" event={"ID":"24061dd1-dc8e-4fb2-b372-25983f927a74","Type":"ContainerStarted","Data":"9d370d38c403950916ca8de1d0d724c998517155ebf0c18204902db22788fa1a"} Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.499414 4949 
generic.go:334] "Generic (PLEG): container finished" podID="db479bb0-2fc9-4a74-aeaa-8bb8446f2657" containerID="02af25e7090bb3712bc983bfe089f4a2119797a976d6b6ca073a53bee130f13f" exitCode=0 Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.499459 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fd00-account-create-rg9pt" event={"ID":"db479bb0-2fc9-4a74-aeaa-8bb8446f2657","Type":"ContainerDied","Data":"02af25e7090bb3712bc983bfe089f4a2119797a976d6b6ca073a53bee130f13f"} Oct 01 15:58:53 crc kubenswrapper[4949]: I1001 15:58:53.499480 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fd00-account-create-rg9pt" event={"ID":"db479bb0-2fc9-4a74-aeaa-8bb8446f2657","Type":"ContainerStarted","Data":"fe1e91c607fb5a18423fd9c6a4aff4078ca8bd79d40ef95a40d52f930749d539"} Oct 01 15:58:54 crc kubenswrapper[4949]: I1001 15:58:54.508427 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" event={"ID":"845971ca-cc3f-4227-adcf-8049ce7fd830","Type":"ContainerStarted","Data":"55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34"} Oct 01 15:58:54 crc kubenswrapper[4949]: I1001 15:58:54.508743 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:54 crc kubenswrapper[4949]: I1001 15:58:54.509759 4949 generic.go:334] "Generic (PLEG): container finished" podID="4b1a3c6f-40a7-4816-8404-4910abf14478" containerID="19c91af858546721d0015fe3ff605169bd9bf4c54c48ff9ba386466a8a79de49" exitCode=0 Oct 01 15:58:54 crc kubenswrapper[4949]: I1001 15:58:54.509845 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t2pgk" event={"ID":"4b1a3c6f-40a7-4816-8404-4910abf14478","Type":"ContainerDied","Data":"19c91af858546721d0015fe3ff605169bd9bf4c54c48ff9ba386466a8a79de49"} Oct 01 15:58:54 crc kubenswrapper[4949]: I1001 15:58:54.529167 4949 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" podStartSLOduration=3.529148014 podStartE2EDuration="3.529148014s" podCreationTimestamp="2025-10-01 15:58:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:58:54.526919413 +0000 UTC m=+1033.832525604" watchObservedRunningTime="2025-10-01 15:58:54.529148014 +0000 UTC m=+1033.834754205" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.101768 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-6041-account-create-67kfb" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.110579 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8p26\" (UniqueName: \"kubernetes.io/projected/24061dd1-dc8e-4fb2-b372-25983f927a74-kube-api-access-j8p26\") pod \"24061dd1-dc8e-4fb2-b372-25983f927a74\" (UID: \"24061dd1-dc8e-4fb2-b372-25983f927a74\") " Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.115089 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5d51-account-create-ctjvl" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.116223 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/24061dd1-dc8e-4fb2-b372-25983f927a74-kube-api-access-j8p26" (OuterVolumeSpecName: "kube-api-access-j8p26") pod "24061dd1-dc8e-4fb2-b372-25983f927a74" (UID: "24061dd1-dc8e-4fb2-b372-25983f927a74"). InnerVolumeSpecName "kube-api-access-j8p26". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.163272 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-fd00-account-create-rg9pt" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.212394 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twmmh\" (UniqueName: \"kubernetes.io/projected/db479bb0-2fc9-4a74-aeaa-8bb8446f2657-kube-api-access-twmmh\") pod \"db479bb0-2fc9-4a74-aeaa-8bb8446f2657\" (UID: \"db479bb0-2fc9-4a74-aeaa-8bb8446f2657\") " Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.212473 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf86r\" (UniqueName: \"kubernetes.io/projected/9f548f94-239b-4712-bc59-5dfa63311d7f-kube-api-access-tf86r\") pod \"9f548f94-239b-4712-bc59-5dfa63311d7f\" (UID: \"9f548f94-239b-4712-bc59-5dfa63311d7f\") " Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.213114 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8p26\" (UniqueName: \"kubernetes.io/projected/24061dd1-dc8e-4fb2-b372-25983f927a74-kube-api-access-j8p26\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.215728 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db479bb0-2fc9-4a74-aeaa-8bb8446f2657-kube-api-access-twmmh" (OuterVolumeSpecName: "kube-api-access-twmmh") pod "db479bb0-2fc9-4a74-aeaa-8bb8446f2657" (UID: "db479bb0-2fc9-4a74-aeaa-8bb8446f2657"). InnerVolumeSpecName "kube-api-access-twmmh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.216354 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f548f94-239b-4712-bc59-5dfa63311d7f-kube-api-access-tf86r" (OuterVolumeSpecName: "kube-api-access-tf86r") pod "9f548f94-239b-4712-bc59-5dfa63311d7f" (UID: "9f548f94-239b-4712-bc59-5dfa63311d7f"). InnerVolumeSpecName "kube-api-access-tf86r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.315247 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twmmh\" (UniqueName: \"kubernetes.io/projected/db479bb0-2fc9-4a74-aeaa-8bb8446f2657-kube-api-access-twmmh\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.315291 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf86r\" (UniqueName: \"kubernetes.io/projected/9f548f94-239b-4712-bc59-5dfa63311d7f-kube-api-access-tf86r\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.518789 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-6041-account-create-67kfb" event={"ID":"24061dd1-dc8e-4fb2-b372-25983f927a74","Type":"ContainerDied","Data":"9d370d38c403950916ca8de1d0d724c998517155ebf0c18204902db22788fa1a"} Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.518834 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d370d38c403950916ca8de1d0d724c998517155ebf0c18204902db22788fa1a" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.518854 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-6041-account-create-67kfb" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.519901 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-fd00-account-create-rg9pt" event={"ID":"db479bb0-2fc9-4a74-aeaa-8bb8446f2657","Type":"ContainerDied","Data":"fe1e91c607fb5a18423fd9c6a4aff4078ca8bd79d40ef95a40d52f930749d539"} Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.519937 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe1e91c607fb5a18423fd9c6a4aff4078ca8bd79d40ef95a40d52f930749d539" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.519986 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-fd00-account-create-rg9pt" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.521856 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-5d51-account-create-ctjvl" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.521898 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-5d51-account-create-ctjvl" event={"ID":"9f548f94-239b-4712-bc59-5dfa63311d7f","Type":"ContainerDied","Data":"de9f67740a0aff35374a1a796c8993f8291ab8d6d68986d428cc58b1dfdaaa82"} Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.521915 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="de9f67740a0aff35374a1a796c8993f8291ab8d6d68986d428cc58b1dfdaaa82" Oct 01 15:58:55 crc kubenswrapper[4949]: E1001 15:58:55.664229 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f548f94_239b_4712_bc59_5dfa63311d7f.slice/crio-de9f67740a0aff35374a1a796c8993f8291ab8d6d68986d428cc58b1dfdaaa82\": RecentStats: unable to find data in memory cache], 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddb479bb0_2fc9_4a74_aeaa_8bb8446f2657.slice\": RecentStats: unable to find data in memory cache]" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.766471 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.819880 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-config-data\") pod \"4b1a3c6f-40a7-4816-8404-4910abf14478\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.820003 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-27mdz\" (UniqueName: \"kubernetes.io/projected/4b1a3c6f-40a7-4816-8404-4910abf14478-kube-api-access-27mdz\") pod \"4b1a3c6f-40a7-4816-8404-4910abf14478\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.820078 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-combined-ca-bundle\") pod \"4b1a3c6f-40a7-4816-8404-4910abf14478\" (UID: \"4b1a3c6f-40a7-4816-8404-4910abf14478\") " Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.826261 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b1a3c6f-40a7-4816-8404-4910abf14478-kube-api-access-27mdz" (OuterVolumeSpecName: "kube-api-access-27mdz") pod "4b1a3c6f-40a7-4816-8404-4910abf14478" (UID: "4b1a3c6f-40a7-4816-8404-4910abf14478"). InnerVolumeSpecName "kube-api-access-27mdz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.848874 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4b1a3c6f-40a7-4816-8404-4910abf14478" (UID: "4b1a3c6f-40a7-4816-8404-4910abf14478"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.867011 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-config-data" (OuterVolumeSpecName: "config-data") pod "4b1a3c6f-40a7-4816-8404-4910abf14478" (UID: "4b1a3c6f-40a7-4816-8404-4910abf14478"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.921494 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-27mdz\" (UniqueName: \"kubernetes.io/projected/4b1a3c6f-40a7-4816-8404-4910abf14478-kube-api-access-27mdz\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.921526 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:55 crc kubenswrapper[4949]: I1001 15:58:55.921538 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4b1a3c6f-40a7-4816-8404-4910abf14478-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.533204 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-db-sync-t2pgk" 
event={"ID":"4b1a3c6f-40a7-4816-8404-4910abf14478","Type":"ContainerDied","Data":"2df95f5c8c4acb09fa5eed50959da890372ee8a2dd6ecd654b7eac2491fe9de1"} Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.533716 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2df95f5c8c4acb09fa5eed50959da890372ee8a2dd6ecd654b7eac2491fe9de1" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.533282 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-db-sync-t2pgk" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.754882 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-trvwm"] Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.755440 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" podUID="845971ca-cc3f-4227-adcf-8049ce7fd830" containerName="dnsmasq-dns" containerID="cri-o://55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34" gracePeriod=10 Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.781899 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-gb56z"] Oct 01 15:58:56 crc kubenswrapper[4949]: E1001 15:58:56.782246 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="24061dd1-dc8e-4fb2-b372-25983f927a74" containerName="mariadb-account-create" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.782263 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="24061dd1-dc8e-4fb2-b372-25983f927a74" containerName="mariadb-account-create" Oct 01 15:58:56 crc kubenswrapper[4949]: E1001 15:58:56.782290 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f548f94-239b-4712-bc59-5dfa63311d7f" containerName="mariadb-account-create" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.782297 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9f548f94-239b-4712-bc59-5dfa63311d7f" containerName="mariadb-account-create" Oct 01 15:58:56 crc kubenswrapper[4949]: E1001 15:58:56.782310 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4b1a3c6f-40a7-4816-8404-4910abf14478" containerName="keystone-db-sync" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.782317 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4b1a3c6f-40a7-4816-8404-4910abf14478" containerName="keystone-db-sync" Oct 01 15:58:56 crc kubenswrapper[4949]: E1001 15:58:56.782337 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db479bb0-2fc9-4a74-aeaa-8bb8446f2657" containerName="mariadb-account-create" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.782343 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="db479bb0-2fc9-4a74-aeaa-8bb8446f2657" containerName="mariadb-account-create" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.782471 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4b1a3c6f-40a7-4816-8404-4910abf14478" containerName="keystone-db-sync" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.782487 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="db479bb0-2fc9-4a74-aeaa-8bb8446f2657" containerName="mariadb-account-create" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.782504 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f548f94-239b-4712-bc59-5dfa63311d7f" containerName="mariadb-account-create" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.782513 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="24061dd1-dc8e-4fb2-b372-25983f927a74" containerName="mariadb-account-create" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.783039 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.786922 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.787055 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.787064 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kckdb" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.788860 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.790374 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nt4wr"] Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.791754 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.806346 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gb56z"] Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.841484 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nt4wr"] Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.848927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-scripts\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.848986 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-config\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.849029 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.849078 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-combined-ca-bundle\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.849168 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-dns-svc\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.849194 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-config-data\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.849247 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-fernet-keys\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.849275 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-credential-keys\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.849309 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqxtk\" (UniqueName: \"kubernetes.io/projected/b0dd0278-64c0-4068-8f4e-42fadcd3df42-kube-api-access-sqxtk\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.849330 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.849353 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z66tl\" (UniqueName: \"kubernetes.io/projected/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-kube-api-access-z66tl\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.951345 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" 
(UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-dns-svc\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.951655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-dns-svc\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.951694 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-config-data\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.951940 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-fernet-keys\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.952004 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-credential-keys\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.952078 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sqxtk\" (UniqueName: \"kubernetes.io/projected/b0dd0278-64c0-4068-8f4e-42fadcd3df42-kube-api-access-sqxtk\") pod 
\"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.952103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.952161 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z66tl\" (UniqueName: \"kubernetes.io/projected/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-kube-api-access-z66tl\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.952234 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-scripts\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.952278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-config\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.952403 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " 
pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.952430 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-combined-ca-bundle\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.955209 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-sb\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.955886 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-nb\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.956050 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-config\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.961650 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-credential-keys\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.967405 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-fernet-keys\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.969993 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-config-data\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.970940 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-combined-ca-bundle\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.974622 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z66tl\" (UniqueName: \"kubernetes.io/projected/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-kube-api-access-z66tl\") pod \"dnsmasq-dns-6546db6db7-nt4wr\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.975746 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sqxtk\" (UniqueName: \"kubernetes.io/projected/b0dd0278-64c0-4068-8f4e-42fadcd3df42-kube-api-access-sqxtk\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:56 crc kubenswrapper[4949]: I1001 15:58:56.977255 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-scripts\") pod \"keystone-bootstrap-gb56z\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.061470 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-db-sync-tjfmw"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.063055 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.067113 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rn68j" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.068194 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.076466 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.135196 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.137634 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.150186 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-db-sync-vs9qk"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.151204 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.167630 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-log-httpd\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.167716 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpbpt\" (UniqueName: \"kubernetes.io/projected/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-kube-api-access-dpbpt\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.167753 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-run-httpd\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.167777 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-combined-ca-bundle\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.167808 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-scripts\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc 
kubenswrapper[4949]: I1001 15:58:57.167854 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-config-data\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.167891 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.167925 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-etc-machine-id\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.168096 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.168210 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-config-data\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.168267 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7zjx2\" (UniqueName: \"kubernetes.io/projected/2ebb36cc-f642-402b-8fc7-c12e7f35776a-kube-api-access-7zjx2\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.168298 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-scripts\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.168325 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-db-sync-config-data\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.174941 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tjfmw"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.181309 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.181589 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qhg79" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.181736 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.190602 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.190915 4949 reflector.go:368] Caches populated for *v1.Secret 
from object-"openstack"/"neutron-config" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.196436 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.216803 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-db-sync-bm8vc"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.218977 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.225555 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ttrc6" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.228787 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.238444 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vs9qk"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.250343 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.261426 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.273178 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bm8vc"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.273964 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274002 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-etc-machine-id\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274046 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-combined-ca-bundle\") pod \"neutron-db-sync-vs9qk\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274077 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274099 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-config-data\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274120 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7zjx2\" (UniqueName: \"kubernetes.io/projected/2ebb36cc-f642-402b-8fc7-c12e7f35776a-kube-api-access-7zjx2\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274156 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-scripts\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274176 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-db-sync-config-data\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274198 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-log-httpd\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274226 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-combined-ca-bundle\") pod 
\"barbican-db-sync-bm8vc\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274269 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpbpt\" (UniqueName: \"kubernetes.io/projected/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-kube-api-access-dpbpt\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-run-httpd\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274309 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-combined-ca-bundle\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274328 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-scripts\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274346 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-config\") pod \"neutron-db-sync-vs9qk\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: 
I1001 15:58:57.274361 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-db-sync-config-data\") pod \"barbican-db-sync-bm8vc\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274384 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ltfj\" (UniqueName: \"kubernetes.io/projected/d7e44454-db3f-453a-8bc9-d8f435685e32-kube-api-access-9ltfj\") pod \"barbican-db-sync-bm8vc\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274404 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46tnl\" (UniqueName: \"kubernetes.io/projected/92c26ffd-a7f6-4593-8718-8947375730ef-kube-api-access-46tnl\") pod \"neutron-db-sync-vs9qk\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.274425 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-config-data\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.275099 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-etc-machine-id\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.277485 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-run-httpd\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.290030 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-log-httpd\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.293763 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-scripts\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.297090 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-config-data\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.297875 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-combined-ca-bundle\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.301741 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-config-data\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " 
pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.303556 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-scripts\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.309936 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.313595 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-db-sync-config-data\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.320664 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.363827 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-db-sync-wfg74"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.364932 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.377084 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-combined-ca-bundle\") pod \"barbican-db-sync-bm8vc\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.377175 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-db-sync-config-data\") pod \"barbican-db-sync-bm8vc\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.377200 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-config\") pod \"neutron-db-sync-vs9qk\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.377227 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9ltfj\" (UniqueName: \"kubernetes.io/projected/d7e44454-db3f-453a-8bc9-d8f435685e32-kube-api-access-9ltfj\") pod \"barbican-db-sync-bm8vc\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.377256 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46tnl\" (UniqueName: \"kubernetes.io/projected/92c26ffd-a7f6-4593-8718-8947375730ef-kube-api-access-46tnl\") pod \"neutron-db-sync-vs9qk\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc 
kubenswrapper[4949]: I1001 15:58:57.377312 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-combined-ca-bundle\") pod \"neutron-db-sync-vs9qk\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.389201 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-db-sync-config-data\") pod \"barbican-db-sync-bm8vc\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.395388 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-combined-ca-bundle\") pod \"barbican-db-sync-bm8vc\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.395706 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-js7fk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.395931 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.396265 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wfg74"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.396302 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.400813 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpbpt\" (UniqueName: 
\"kubernetes.io/projected/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-kube-api-access-dpbpt\") pod \"cinder-db-sync-tjfmw\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.400849 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7zjx2\" (UniqueName: \"kubernetes.io/projected/2ebb36cc-f642-402b-8fc7-c12e7f35776a-kube-api-access-7zjx2\") pod \"ceilometer-0\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.404570 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-config\") pod \"neutron-db-sync-vs9qk\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.417733 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-combined-ca-bundle\") pod \"neutron-db-sync-vs9qk\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.419455 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46tnl\" (UniqueName: \"kubernetes.io/projected/92c26ffd-a7f6-4593-8718-8947375730ef-kube-api-access-46tnl\") pod \"neutron-db-sync-vs9qk\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.422651 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ltfj\" (UniqueName: \"kubernetes.io/projected/d7e44454-db3f-453a-8bc9-d8f435685e32-kube-api-access-9ltfj\") pod \"barbican-db-sync-bm8vc\" (UID: 
\"d7e44454-db3f-453a-8bc9-d8f435685e32\") " pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.433255 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nt4wr"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.480008 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.480741 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5b66-a190-4c1a-8607-863f93075c01-logs\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.480816 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-scripts\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.480836 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-combined-ca-bundle\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.480861 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgcks\" (UniqueName: \"kubernetes.io/projected/81bf5b66-a190-4c1a-8607-863f93075c01-kube-api-access-xgcks\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 
15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.480903 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-config-data\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.499526 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.507262 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-4qcv5"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.508539 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.517185 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-4qcv5"] Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.537188 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636279 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-config\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636384 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636424 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5b66-a190-4c1a-8607-863f93075c01-logs\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636505 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636568 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4dnd\" (UniqueName: \"kubernetes.io/projected/4e77e083-5e09-4383-8faa-1b16c353b5af-kube-api-access-p4dnd\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " 
pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636646 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-scripts\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636684 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-combined-ca-bundle\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636741 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xgcks\" (UniqueName: \"kubernetes.io/projected/81bf5b66-a190-4c1a-8607-863f93075c01-kube-api-access-xgcks\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636790 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.636898 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-config-data\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc 
kubenswrapper[4949]: I1001 15:58:57.643491 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-config-data\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.643976 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5b66-a190-4c1a-8607-863f93075c01-logs\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.647049 4949 generic.go:334] "Generic (PLEG): container finished" podID="845971ca-cc3f-4227-adcf-8049ce7fd830" containerID="55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34" exitCode=0 Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.650320 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.652882 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-combined-ca-bundle\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.658811 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-scripts\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.669487 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" event={"ID":"845971ca-cc3f-4227-adcf-8049ce7fd830","Type":"ContainerDied","Data":"55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34"} Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.669551 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" event={"ID":"845971ca-cc3f-4227-adcf-8049ce7fd830","Type":"ContainerDied","Data":"4dd9522e4abb653b1e503c801892321b9b6d356a4df0e5cc2902cca91fde58db"} Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.669578 4949 scope.go:117] "RemoveContainer" containerID="55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.684509 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xgcks\" (UniqueName: \"kubernetes.io/projected/81bf5b66-a190-4c1a-8607-863f93075c01-kube-api-access-xgcks\") pod \"placement-db-sync-wfg74\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " pod="openstack/placement-db-sync-wfg74" Oct 01 
15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.689647 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.738168 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-nb\") pod \"845971ca-cc3f-4227-adcf-8049ce7fd830\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.745271 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xw867\" (UniqueName: \"kubernetes.io/projected/845971ca-cc3f-4227-adcf-8049ce7fd830-kube-api-access-xw867\") pod \"845971ca-cc3f-4227-adcf-8049ce7fd830\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.745370 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-config\") pod \"845971ca-cc3f-4227-adcf-8049ce7fd830\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.745485 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-dns-svc\") pod \"845971ca-cc3f-4227-adcf-8049ce7fd830\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.745517 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-sb\") pod \"845971ca-cc3f-4227-adcf-8049ce7fd830\" (UID: \"845971ca-cc3f-4227-adcf-8049ce7fd830\") " Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.745831 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.745981 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-config\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.746036 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.746103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.746153 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p4dnd\" (UniqueName: \"kubernetes.io/projected/4e77e083-5e09-4383-8faa-1b16c353b5af-kube-api-access-p4dnd\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.750016 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-nb\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.750570 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-config\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.751557 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/845971ca-cc3f-4227-adcf-8049ce7fd830-kube-api-access-xw867" (OuterVolumeSpecName: "kube-api-access-xw867") pod "845971ca-cc3f-4227-adcf-8049ce7fd830" (UID: "845971ca-cc3f-4227-adcf-8049ce7fd830"). InnerVolumeSpecName "kube-api-access-xw867". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.752471 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-dns-svc\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.752961 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-sb\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.754527 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-db-sync-wfg74" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.768204 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p4dnd\" (UniqueName: \"kubernetes.io/projected/4e77e083-5e09-4383-8faa-1b16c353b5af-kube-api-access-p4dnd\") pod \"dnsmasq-dns-7987f74bbc-4qcv5\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") " pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.792286 4949 scope.go:117] "RemoveContainer" containerID="133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.803113 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-config" (OuterVolumeSpecName: "config") pod "845971ca-cc3f-4227-adcf-8049ce7fd830" (UID: "845971ca-cc3f-4227-adcf-8049ce7fd830"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.834051 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "845971ca-cc3f-4227-adcf-8049ce7fd830" (UID: "845971ca-cc3f-4227-adcf-8049ce7fd830"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.838995 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "845971ca-cc3f-4227-adcf-8049ce7fd830" (UID: "845971ca-cc3f-4227-adcf-8049ce7fd830"). InnerVolumeSpecName "ovsdbserver-sb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.842440 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "845971ca-cc3f-4227-adcf-8049ce7fd830" (UID: "845971ca-cc3f-4227-adcf-8049ce7fd830"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.847841 4949 scope.go:117] "RemoveContainer" containerID="55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.848999 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.849027 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xw867\" (UniqueName: \"kubernetes.io/projected/845971ca-cc3f-4227-adcf-8049ce7fd830-kube-api-access-xw867\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.849060 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.849071 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.849080 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/845971ca-cc3f-4227-adcf-8049ce7fd830-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:57 crc 
kubenswrapper[4949]: E1001 15:58:57.849099 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34\": container with ID starting with 55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34 not found: ID does not exist" containerID="55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.849153 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34"} err="failed to get container status \"55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34\": rpc error: code = NotFound desc = could not find container \"55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34\": container with ID starting with 55822a2dac7b886a43628718190968899263698b708796715a2afb318c948b34 not found: ID does not exist" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.849180 4949 scope.go:117] "RemoveContainer" containerID="133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3" Oct 01 15:58:57 crc kubenswrapper[4949]: E1001 15:58:57.849494 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3\": container with ID starting with 133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3 not found: ID does not exist" containerID="133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.849534 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3"} err="failed to get container status 
\"133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3\": rpc error: code = NotFound desc = could not find container \"133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3\": container with ID starting with 133bccde878cfdc1f2993d13c997ff10a6564a61a177295435f586138233d9e3 not found: ID does not exist" Oct 01 15:58:57 crc kubenswrapper[4949]: I1001 15:58:57.912499 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.045693 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-gb56z"] Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.180300 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nt4wr"] Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.300527 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-db-sync-bm8vc"] Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.387493 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-db-sync-vs9qk"] Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.393610 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-db-sync-tjfmw"] Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.417682 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.426698 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-db-sync-wfg74"] Oct 01 15:58:58 crc kubenswrapper[4949]: W1001 15:58:58.427831 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod81bf5b66_a190_4c1a_8607_863f93075c01.slice/crio-bbb4c678d2dda41eaa4a267f84b24e656ec923e26469bd966a71f87c73530fa5 WatchSource:0}: Error finding container 
bbb4c678d2dda41eaa4a267f84b24e656ec923e26469bd966a71f87c73530fa5: Status 404 returned error can't find the container with id bbb4c678d2dda41eaa4a267f84b24e656ec923e26469bd966a71f87c73530fa5 Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.549528 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-4qcv5"] Oct 01 15:58:58 crc kubenswrapper[4949]: W1001 15:58:58.559780 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4e77e083_5e09_4383_8faa_1b16c353b5af.slice/crio-3da457eb9f56f34e1e29af8583b802f3c62e61060a533d456f24fcff49ad3010 WatchSource:0}: Error finding container 3da457eb9f56f34e1e29af8583b802f3c62e61060a533d456f24fcff49ad3010: Status 404 returned error can't find the container with id 3da457eb9f56f34e1e29af8583b802f3c62e61060a533d456f24fcff49ad3010 Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.656861 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gb56z" event={"ID":"b0dd0278-64c0-4068-8f4e-42fadcd3df42","Type":"ContainerStarted","Data":"4fa4f79ebcfd549e4d4b251e9b94b398074ed3501b9947b7e40828532899f1ca"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.657156 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gb56z" event={"ID":"b0dd0278-64c0-4068-8f4e-42fadcd3df42","Type":"ContainerStarted","Data":"0cb115372f53004189a0464d944508aa08a2768ed05485a69e594445ebf9cc3e"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.659092 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-54f9b7b8d9-trvwm" Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.666522 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vs9qk" event={"ID":"92c26ffd-a7f6-4593-8718-8947375730ef","Type":"ContainerStarted","Data":"d062929600b36141648398c096f2ff8c2539dee38d9804190b4ef4444bf280de"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.666563 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vs9qk" event={"ID":"92c26ffd-a7f6-4593-8718-8947375730ef","Type":"ContainerStarted","Data":"ac9ba166b8a006538847042cde7ff5157a60178a25248d1a515303e4770a94f5"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.667459 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerStarted","Data":"1c28869d35fff2e54a32cffb4e1ddc78b5461d5051aed6fc82b2458f23666fb8"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.669479 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wfg74" event={"ID":"81bf5b66-a190-4c1a-8607-863f93075c01","Type":"ContainerStarted","Data":"bbb4c678d2dda41eaa4a267f84b24e656ec923e26469bd966a71f87c73530fa5"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.673416 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tjfmw" event={"ID":"41cbbbe8-6b79-4667-ba8d-7252d0d1a998","Type":"ContainerStarted","Data":"8d65d9ea6759625ff172f26248c794ef1646eaadd6f52af045acd995fc353dd4"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.678356 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-gb56z" podStartSLOduration=2.678339914 podStartE2EDuration="2.678339914s" podCreationTimestamp="2025-10-01 15:58:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2025-10-01 15:58:58.675460166 +0000 UTC m=+1037.981066387" watchObservedRunningTime="2025-10-01 15:58:58.678339914 +0000 UTC m=+1037.983946105" Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.680024 4949 generic.go:334] "Generic (PLEG): container finished" podID="4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" containerID="33b653c9f5e31a1a30f27e9dfb9b1427ba349953e31b2ddc6d95417bb6260867" exitCode=0 Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.680091 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" event={"ID":"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24","Type":"ContainerDied","Data":"33b653c9f5e31a1a30f27e9dfb9b1427ba349953e31b2ddc6d95417bb6260867"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.680116 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" event={"ID":"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24","Type":"ContainerStarted","Data":"7fb792d5904f3487cfa4063fe9f61d85fee79b87755c162115fc876a92ab2020"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.681309 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bm8vc" event={"ID":"d7e44454-db3f-453a-8bc9-d8f435685e32","Type":"ContainerStarted","Data":"f8dff302d653772dcb5afad4658c13a6b347f25be97895f33116cf2e328f75e1"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.682137 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" event={"ID":"4e77e083-5e09-4383-8faa-1b16c353b5af","Type":"ContainerStarted","Data":"3da457eb9f56f34e1e29af8583b802f3c62e61060a533d456f24fcff49ad3010"} Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.700087 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-db-sync-vs9qk" podStartSLOduration=1.700071264 podStartE2EDuration="1.700071264s" podCreationTimestamp="2025-10-01 15:58:57 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:58:58.698531412 +0000 UTC m=+1038.004137593" watchObservedRunningTime="2025-10-01 15:58:58.700071264 +0000 UTC m=+1038.005677455" Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.755471 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-trvwm"] Oct 01 15:58:58 crc kubenswrapper[4949]: I1001 15:58:58.765441 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-54f9b7b8d9-trvwm"] Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.106688 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.173364 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-dns-svc\") pod \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.173398 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-sb\") pod \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.173474 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-nb\") pod \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.173531 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-z66tl\" (UniqueName: \"kubernetes.io/projected/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-kube-api-access-z66tl\") pod \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.173582 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-config\") pod \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\" (UID: \"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24\") " Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.201302 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-kube-api-access-z66tl" (OuterVolumeSpecName: "kube-api-access-z66tl") pod "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" (UID: "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24"). InnerVolumeSpecName "kube-api-access-z66tl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.209601 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" (UID: "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.215476 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-config" (OuterVolumeSpecName: "config") pod "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" (UID: "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.217740 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" (UID: "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.223520 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" (UID: "4e396de6-8111-4a48-b1ec-ae8b7bcfdd24"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.275799 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.275839 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z66tl\" (UniqueName: \"kubernetes.io/projected/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-kube-api-access-z66tl\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.275853 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.275864 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:59 crc 
kubenswrapper[4949]: I1001 15:58:59.275875 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.613413 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="845971ca-cc3f-4227-adcf-8049ce7fd830" path="/var/lib/kubelet/pods/845971ca-cc3f-4227-adcf-8049ce7fd830/volumes" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.707231 4949 generic.go:334] "Generic (PLEG): container finished" podID="4e77e083-5e09-4383-8faa-1b16c353b5af" containerID="6590b796907198c29431558d544819d99d2712a0951b198c93ae3d929c255546" exitCode=0 Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.707311 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" event={"ID":"4e77e083-5e09-4383-8faa-1b16c353b5af","Type":"ContainerDied","Data":"6590b796907198c29431558d544819d99d2712a0951b198c93ae3d929c255546"} Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.716207 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.717001 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6546db6db7-nt4wr" event={"ID":"4e396de6-8111-4a48-b1ec-ae8b7bcfdd24","Type":"ContainerDied","Data":"7fb792d5904f3487cfa4063fe9f61d85fee79b87755c162115fc876a92ab2020"} Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.717041 4949 scope.go:117] "RemoveContainer" containerID="33b653c9f5e31a1a30f27e9dfb9b1427ba349953e31b2ddc6d95417bb6260867" Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.888451 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nt4wr"] Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.939480 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6546db6db7-nt4wr"] Oct 01 15:58:59 crc kubenswrapper[4949]: I1001 15:58:59.945419 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 15:59:00 crc kubenswrapper[4949]: I1001 15:59:00.734098 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" event={"ID":"4e77e083-5e09-4383-8faa-1b16c353b5af","Type":"ContainerStarted","Data":"2e1507190ea05aa88c12635ee1bddfcad1ca80c8b9a6c04f283668711e625c3d"} Oct 01 15:59:00 crc kubenswrapper[4949]: I1001 15:59:00.734494 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:59:00 crc kubenswrapper[4949]: I1001 15:59:00.750633 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" podStartSLOduration=3.750613307 podStartE2EDuration="3.750613307s" podCreationTimestamp="2025-10-01 15:58:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:00.74997645 +0000 UTC 
m=+1040.055582661" watchObservedRunningTime="2025-10-01 15:59:00.750613307 +0000 UTC m=+1040.056219498" Oct 01 15:59:01 crc kubenswrapper[4949]: I1001 15:59:01.611665 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" path="/var/lib/kubelet/pods/4e396de6-8111-4a48-b1ec-ae8b7bcfdd24/volumes" Oct 01 15:59:02 crc kubenswrapper[4949]: I1001 15:59:02.757335 4949 generic.go:334] "Generic (PLEG): container finished" podID="b0dd0278-64c0-4068-8f4e-42fadcd3df42" containerID="4fa4f79ebcfd549e4d4b251e9b94b398074ed3501b9947b7e40828532899f1ca" exitCode=0 Oct 01 15:59:02 crc kubenswrapper[4949]: I1001 15:59:02.757407 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gb56z" event={"ID":"b0dd0278-64c0-4068-8f4e-42fadcd3df42","Type":"ContainerDied","Data":"4fa4f79ebcfd549e4d4b251e9b94b398074ed3501b9947b7e40828532899f1ca"} Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.148401 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.315776 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-combined-ca-bundle\") pod \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.315860 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-config-data\") pod \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.315902 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-fernet-keys\") pod \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.316043 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-credential-keys\") pod \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.316092 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sqxtk\" (UniqueName: \"kubernetes.io/projected/b0dd0278-64c0-4068-8f4e-42fadcd3df42-kube-api-access-sqxtk\") pod \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.316167 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-scripts\") pod \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\" (UID: \"b0dd0278-64c0-4068-8f4e-42fadcd3df42\") " Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.322328 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "b0dd0278-64c0-4068-8f4e-42fadcd3df42" (UID: "b0dd0278-64c0-4068-8f4e-42fadcd3df42"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.321598 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b0dd0278-64c0-4068-8f4e-42fadcd3df42-kube-api-access-sqxtk" (OuterVolumeSpecName: "kube-api-access-sqxtk") pod "b0dd0278-64c0-4068-8f4e-42fadcd3df42" (UID: "b0dd0278-64c0-4068-8f4e-42fadcd3df42"). InnerVolumeSpecName "kube-api-access-sqxtk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.334867 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-scripts" (OuterVolumeSpecName: "scripts") pod "b0dd0278-64c0-4068-8f4e-42fadcd3df42" (UID: "b0dd0278-64c0-4068-8f4e-42fadcd3df42"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.335291 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "b0dd0278-64c0-4068-8f4e-42fadcd3df42" (UID: "b0dd0278-64c0-4068-8f4e-42fadcd3df42"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.350314 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-config-data" (OuterVolumeSpecName: "config-data") pod "b0dd0278-64c0-4068-8f4e-42fadcd3df42" (UID: "b0dd0278-64c0-4068-8f4e-42fadcd3df42"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.353303 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b0dd0278-64c0-4068-8f4e-42fadcd3df42" (UID: "b0dd0278-64c0-4068-8f4e-42fadcd3df42"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.418103 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.418160 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.418175 4949 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.418187 4949 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 01 
15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.418199 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sqxtk\" (UniqueName: \"kubernetes.io/projected/b0dd0278-64c0-4068-8f4e-42fadcd3df42-kube-api-access-sqxtk\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.418213 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b0dd0278-64c0-4068-8f4e-42fadcd3df42-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.790766 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-gb56z" event={"ID":"b0dd0278-64c0-4068-8f4e-42fadcd3df42","Type":"ContainerDied","Data":"0cb115372f53004189a0464d944508aa08a2768ed05485a69e594445ebf9cc3e"} Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.790829 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cb115372f53004189a0464d944508aa08a2768ed05485a69e594445ebf9cc3e" Oct 01 15:59:05 crc kubenswrapper[4949]: I1001 15:59:05.790807 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-gb56z" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.232984 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-gb56z"] Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.238805 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-gb56z"] Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.325841 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-bootstrap-26kd8"] Oct 01 15:59:06 crc kubenswrapper[4949]: E1001 15:59:06.326278 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845971ca-cc3f-4227-adcf-8049ce7fd830" containerName="init" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.326301 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="845971ca-cc3f-4227-adcf-8049ce7fd830" containerName="init" Oct 01 15:59:06 crc kubenswrapper[4949]: E1001 15:59:06.326336 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b0dd0278-64c0-4068-8f4e-42fadcd3df42" containerName="keystone-bootstrap" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.326346 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b0dd0278-64c0-4068-8f4e-42fadcd3df42" containerName="keystone-bootstrap" Oct 01 15:59:06 crc kubenswrapper[4949]: E1001 15:59:06.326362 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" containerName="init" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.326370 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" containerName="init" Oct 01 15:59:06 crc kubenswrapper[4949]: E1001 15:59:06.326389 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="845971ca-cc3f-4227-adcf-8049ce7fd830" containerName="dnsmasq-dns" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.326397 4949 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="845971ca-cc3f-4227-adcf-8049ce7fd830" containerName="dnsmasq-dns" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.326807 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="845971ca-cc3f-4227-adcf-8049ce7fd830" containerName="dnsmasq-dns" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.326831 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e396de6-8111-4a48-b1ec-ae8b7bcfdd24" containerName="init" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.326852 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b0dd0278-64c0-4068-8f4e-42fadcd3df42" containerName="keystone-bootstrap" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.327432 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.330268 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.330313 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.330489 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.333830 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kckdb" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.344447 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-26kd8"] Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.437284 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-fernet-keys\") pod \"keystone-bootstrap-26kd8\" (UID: 
\"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.437364 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-credential-keys\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.437629 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tlk2h\" (UniqueName: \"kubernetes.io/projected/f4940aa0-8f70-4b3e-b9b4-b1e299993441-kube-api-access-tlk2h\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.437701 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-config-data\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.438504 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-scripts\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.438833 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-combined-ca-bundle\") pod \"keystone-bootstrap-26kd8\" (UID: 
\"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.541708 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tlk2h\" (UniqueName: \"kubernetes.io/projected/f4940aa0-8f70-4b3e-b9b4-b1e299993441-kube-api-access-tlk2h\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.541872 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-config-data\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.541950 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-scripts\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.542015 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-combined-ca-bundle\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.542085 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-fernet-keys\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc 
kubenswrapper[4949]: I1001 15:59:06.542275 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-credential-keys\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.549210 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-combined-ca-bundle\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.549608 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-config-data\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.559208 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-fernet-keys\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.559672 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-credential-keys\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.559803 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-scripts\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.563728 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tlk2h\" (UniqueName: \"kubernetes.io/projected/f4940aa0-8f70-4b3e-b9b4-b1e299993441-kube-api-access-tlk2h\") pod \"keystone-bootstrap-26kd8\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:06 crc kubenswrapper[4949]: I1001 15:59:06.650855 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:07 crc kubenswrapper[4949]: I1001 15:59:07.614107 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b0dd0278-64c0-4068-8f4e-42fadcd3df42" path="/var/lib/kubelet/pods/b0dd0278-64c0-4068-8f4e-42fadcd3df42/volumes" Oct 01 15:59:07 crc kubenswrapper[4949]: I1001 15:59:07.914326 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" Oct 01 15:59:07 crc kubenswrapper[4949]: I1001 15:59:07.971083 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tt8cw"] Oct 01 15:59:07 crc kubenswrapper[4949]: I1001 15:59:07.971535 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerName="dnsmasq-dns" containerID="cri-o://036c124ae8721476d22934a06e2614e69914392f5db1da0d747a732fc1de6874" gracePeriod=10 Oct 01 15:59:08 crc kubenswrapper[4949]: I1001 15:59:08.818934 4949 generic.go:334] "Generic (PLEG): container finished" podID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerID="036c124ae8721476d22934a06e2614e69914392f5db1da0d747a732fc1de6874" exitCode=0 Oct 01 15:59:08 crc 
kubenswrapper[4949]: I1001 15:59:08.818985 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" event={"ID":"3919ba46-ce04-4089-a5d1-033501df8eaf","Type":"ContainerDied","Data":"036c124ae8721476d22934a06e2614e69914392f5db1da0d747a732fc1de6874"} Oct 01 15:59:08 crc kubenswrapper[4949]: I1001 15:59:08.926188 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Oct 01 15:59:13 crc kubenswrapper[4949]: I1001 15:59:13.926052 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Oct 01 15:59:18 crc kubenswrapper[4949]: I1001 15:59:18.926718 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.113:5353: connect: connection refused" Oct 01 15:59:18 crc kubenswrapper[4949]: I1001 15:59:18.927254 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" Oct 01 15:59:23 crc kubenswrapper[4949]: E1001 15:59:23.403800 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified" Oct 01 15:59:23 crc kubenswrapper[4949]: E1001 15:59:23.404400 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cinder-db-sync,Image:quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified,Command:[/bin/bash],Args:[-c 
/usr/local/bin/kolla_set_configs && /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KOLLA_BOOTSTRAP,Value:TRUE,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:etc-machine-id,ReadOnly:true,MountPath:/etc/machine-id,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:true,MountPath:/usr/local/bin/container-scripts,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/config-data/merged,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/etc/my.cnf,SubPath:my.cnf,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:db-sync-config-data,ReadOnly:true,MountPath:/etc/cinder/cinder.conf.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:true,MountPath:/var/lib/kolla/config_files/config.json,SubPath:db-sync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpbpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompPro
file:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cinder-db-sync-tjfmw_openstack(41cbbbe8-6b79-4667-ba8d-7252d0d1a998): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 01 15:59:23 crc kubenswrapper[4949]: E1001 15:59:23.405963 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/cinder-db-sync-tjfmw" podUID="41cbbbe8-6b79-4667-ba8d-7252d0d1a998" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.525954 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.544334 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-sb\") pod \"3919ba46-ce04-4089-a5d1-033501df8eaf\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.544428 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-config\") pod \"3919ba46-ce04-4089-a5d1-033501df8eaf\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.544471 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-nb\") pod \"3919ba46-ce04-4089-a5d1-033501df8eaf\" (UID: 
\"3919ba46-ce04-4089-a5d1-033501df8eaf\") " Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.544509 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rh2b\" (UniqueName: \"kubernetes.io/projected/3919ba46-ce04-4089-a5d1-033501df8eaf-kube-api-access-5rh2b\") pod \"3919ba46-ce04-4089-a5d1-033501df8eaf\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.544533 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-dns-svc\") pod \"3919ba46-ce04-4089-a5d1-033501df8eaf\" (UID: \"3919ba46-ce04-4089-a5d1-033501df8eaf\") " Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.553522 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3919ba46-ce04-4089-a5d1-033501df8eaf-kube-api-access-5rh2b" (OuterVolumeSpecName: "kube-api-access-5rh2b") pod "3919ba46-ce04-4089-a5d1-033501df8eaf" (UID: "3919ba46-ce04-4089-a5d1-033501df8eaf"). InnerVolumeSpecName "kube-api-access-5rh2b". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.609168 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "3919ba46-ce04-4089-a5d1-033501df8eaf" (UID: "3919ba46-ce04-4089-a5d1-033501df8eaf"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.629437 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "3919ba46-ce04-4089-a5d1-033501df8eaf" (UID: "3919ba46-ce04-4089-a5d1-033501df8eaf"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.643820 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-config" (OuterVolumeSpecName: "config") pod "3919ba46-ce04-4089-a5d1-033501df8eaf" (UID: "3919ba46-ce04-4089-a5d1-033501df8eaf"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.646370 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.647428 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.647467 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5rh2b\" (UniqueName: \"kubernetes.io/projected/3919ba46-ce04-4089-a5d1-033501df8eaf-kube-api-access-5rh2b\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.647485 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 
15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.648470 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "3919ba46-ce04-4089-a5d1-033501df8eaf" (UID: "3919ba46-ce04-4089-a5d1-033501df8eaf"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.661087 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-bootstrap-26kd8"] Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.760462 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/3919ba46-ce04-4089-a5d1-033501df8eaf-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.957093 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.957090 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-86db49b7ff-tt8cw" event={"ID":"3919ba46-ce04-4089-a5d1-033501df8eaf","Type":"ContainerDied","Data":"7a078b73bd05c3d994870717e70a0f472065dcc8c75558b7286a1c5b970637ae"} Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.957186 4949 scope.go:117] "RemoveContainer" containerID="036c124ae8721476d22934a06e2614e69914392f5db1da0d747a732fc1de6874" Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.972493 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wfg74" event={"ID":"81bf5b66-a190-4c1a-8607-863f93075c01","Type":"ContainerStarted","Data":"6f4d8443e7f3ab71dd2b9192b52536952c0886c7553f72b34ce0bf26abfa4d61"} Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.973947 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bm8vc" 
event={"ID":"d7e44454-db3f-453a-8bc9-d8f435685e32","Type":"ContainerStarted","Data":"bc890e9d93a4a22895864dbd37936bcc5dbae355582e1b48fc18bc3b26aa3243"} Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.975472 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-26kd8" event={"ID":"f4940aa0-8f70-4b3e-b9b4-b1e299993441","Type":"ContainerStarted","Data":"c57d17611be8f2befa48c27321584d20f70833f57894933b63895fc3722964fa"} Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.975509 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-26kd8" event={"ID":"f4940aa0-8f70-4b3e-b9b4-b1e299993441","Type":"ContainerStarted","Data":"5cc1d8c9193edd2ebc4cd65c8652891b386e7e3666f3f8aff85a17e560cf597b"} Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.981583 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerStarted","Data":"f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4"} Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.993662 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tt8cw"] Oct 01 15:59:23 crc kubenswrapper[4949]: I1001 15:59:23.995456 4949 scope.go:117] "RemoveContainer" containerID="f27741557178a0cb704261316b2b5ba1ae7d5c1d0116e8f74635aab93a9ed5cd" Oct 01 15:59:23 crc kubenswrapper[4949]: E1001 15:59:23.995499 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cinder-db-sync\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-cinder-api:current-podified\\\"\"" pod="openstack/cinder-db-sync-tjfmw" podUID="41cbbbe8-6b79-4667-ba8d-7252d0d1a998" Oct 01 15:59:24 crc kubenswrapper[4949]: I1001 15:59:24.002288 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-86db49b7ff-tt8cw"] Oct 01 15:59:24 crc 
kubenswrapper[4949]: I1001 15:59:24.011197 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-db-sync-wfg74" podStartSLOduration=2.226375659 podStartE2EDuration="27.011182892s" podCreationTimestamp="2025-10-01 15:58:57 +0000 UTC" firstStartedPulling="2025-10-01 15:58:58.435781314 +0000 UTC m=+1037.741387505" lastFinishedPulling="2025-10-01 15:59:23.220588537 +0000 UTC m=+1062.526194738" observedRunningTime="2025-10-01 15:59:24.008856709 +0000 UTC m=+1063.314462910" watchObservedRunningTime="2025-10-01 15:59:24.011182892 +0000 UTC m=+1063.316789083" Oct 01 15:59:24 crc kubenswrapper[4949]: I1001 15:59:24.040380 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-db-sync-bm8vc" podStartSLOduration=2.14234541 podStartE2EDuration="27.040359033s" podCreationTimestamp="2025-10-01 15:58:57 +0000 UTC" firstStartedPulling="2025-10-01 15:58:58.322317987 +0000 UTC m=+1037.627924178" lastFinishedPulling="2025-10-01 15:59:23.22033161 +0000 UTC m=+1062.525937801" observedRunningTime="2025-10-01 15:59:24.035908223 +0000 UTC m=+1063.341514424" watchObservedRunningTime="2025-10-01 15:59:24.040359033 +0000 UTC m=+1063.345965224" Oct 01 15:59:24 crc kubenswrapper[4949]: I1001 15:59:24.056955 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-bootstrap-26kd8" podStartSLOduration=18.056937663 podStartE2EDuration="18.056937663s" podCreationTimestamp="2025-10-01 15:59:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:24.055000071 +0000 UTC m=+1063.360606262" watchObservedRunningTime="2025-10-01 15:59:24.056937663 +0000 UTC m=+1063.362543854" Oct 01 15:59:25 crc kubenswrapper[4949]: I1001 15:59:25.612619 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" 
path="/var/lib/kubelet/pods/3919ba46-ce04-4089-a5d1-033501df8eaf/volumes" Oct 01 15:59:26 crc kubenswrapper[4949]: I1001 15:59:26.005711 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerStarted","Data":"2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398"} Oct 01 15:59:28 crc kubenswrapper[4949]: I1001 15:59:28.024866 4949 generic.go:334] "Generic (PLEG): container finished" podID="f4940aa0-8f70-4b3e-b9b4-b1e299993441" containerID="c57d17611be8f2befa48c27321584d20f70833f57894933b63895fc3722964fa" exitCode=0 Oct 01 15:59:28 crc kubenswrapper[4949]: I1001 15:59:28.024961 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-26kd8" event={"ID":"f4940aa0-8f70-4b3e-b9b4-b1e299993441","Type":"ContainerDied","Data":"c57d17611be8f2befa48c27321584d20f70833f57894933b63895fc3722964fa"} Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.760859 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.894688 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-fernet-keys\") pod \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.895079 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-scripts\") pod \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.895180 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-credential-keys\") pod \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.895270 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-config-data\") pod \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.895324 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-combined-ca-bundle\") pod \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.895455 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tlk2h\" (UniqueName: 
\"kubernetes.io/projected/f4940aa0-8f70-4b3e-b9b4-b1e299993441-kube-api-access-tlk2h\") pod \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\" (UID: \"f4940aa0-8f70-4b3e-b9b4-b1e299993441\") " Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.901661 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-scripts" (OuterVolumeSpecName: "scripts") pod "f4940aa0-8f70-4b3e-b9b4-b1e299993441" (UID: "f4940aa0-8f70-4b3e-b9b4-b1e299993441"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.902383 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f4940aa0-8f70-4b3e-b9b4-b1e299993441-kube-api-access-tlk2h" (OuterVolumeSpecName: "kube-api-access-tlk2h") pod "f4940aa0-8f70-4b3e-b9b4-b1e299993441" (UID: "f4940aa0-8f70-4b3e-b9b4-b1e299993441"). InnerVolumeSpecName "kube-api-access-tlk2h". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.902868 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "f4940aa0-8f70-4b3e-b9b4-b1e299993441" (UID: "f4940aa0-8f70-4b3e-b9b4-b1e299993441"). InnerVolumeSpecName "credential-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.902944 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "f4940aa0-8f70-4b3e-b9b4-b1e299993441" (UID: "f4940aa0-8f70-4b3e-b9b4-b1e299993441"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.929336 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-config-data" (OuterVolumeSpecName: "config-data") pod "f4940aa0-8f70-4b3e-b9b4-b1e299993441" (UID: "f4940aa0-8f70-4b3e-b9b4-b1e299993441"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.949272 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f4940aa0-8f70-4b3e-b9b4-b1e299993441" (UID: "f4940aa0-8f70-4b3e-b9b4-b1e299993441"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.997784 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.997844 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.997868 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tlk2h\" (UniqueName: \"kubernetes.io/projected/f4940aa0-8f70-4b3e-b9b4-b1e299993441-kube-api-access-tlk2h\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.997886 4949 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-fernet-keys\") on node \"crc\" DevicePath 
\"\"" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.997904 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:34 crc kubenswrapper[4949]: I1001 15:59:34.997920 4949 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/f4940aa0-8f70-4b3e-b9b4-b1e299993441-credential-keys\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.094405 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-bootstrap-26kd8" event={"ID":"f4940aa0-8f70-4b3e-b9b4-b1e299993441","Type":"ContainerDied","Data":"5cc1d8c9193edd2ebc4cd65c8652891b386e7e3666f3f8aff85a17e560cf597b"} Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.094450 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5cc1d8c9193edd2ebc4cd65c8652891b386e7e3666f3f8aff85a17e560cf597b" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.094474 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-bootstrap-26kd8" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.918504 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-6544c97df6-6skzg"] Oct 01 15:59:35 crc kubenswrapper[4949]: E1001 15:59:35.919919 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerName="dnsmasq-dns" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.920749 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerName="dnsmasq-dns" Oct 01 15:59:35 crc kubenswrapper[4949]: E1001 15:59:35.920874 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerName="init" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.920960 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerName="init" Oct 01 15:59:35 crc kubenswrapper[4949]: E1001 15:59:35.921061 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4940aa0-8f70-4b3e-b9b4-b1e299993441" containerName="keystone-bootstrap" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.921160 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4940aa0-8f70-4b3e-b9b4-b1e299993441" containerName="keystone-bootstrap" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.921456 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4940aa0-8f70-4b3e-b9b4-b1e299993441" containerName="keystone-bootstrap" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.921574 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3919ba46-ce04-4089-a5d1-033501df8eaf" containerName="dnsmasq-dns" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.923282 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.930446 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-scripts" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.930470 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.933519 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-public-svc" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.933549 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-config-data" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.933640 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"keystone-keystone-dockercfg-kckdb" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.936648 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-keystone-internal-svc" Oct 01 15:59:35 crc kubenswrapper[4949]: I1001 15:59:35.941161 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6544c97df6-6skzg"] Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.104628 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerStarted","Data":"f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684"} Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.115764 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5kts\" (UniqueName: \"kubernetes.io/projected/764dea52-7d14-4da5-a50d-2fa41001e2b4-kube-api-access-t5kts\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 
15:59:36.115818 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-scripts\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.115848 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-config-data\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.116091 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-combined-ca-bundle\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.116179 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-credential-keys\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.116439 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-internal-tls-certs\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 
15:59:36.116570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-public-tls-certs\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.116693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-fernet-keys\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.218599 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-public-tls-certs\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.218921 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-fernet-keys\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.218961 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t5kts\" (UniqueName: \"kubernetes.io/projected/764dea52-7d14-4da5-a50d-2fa41001e2b4-kube-api-access-t5kts\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.218982 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-scripts\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.219004 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-config-data\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.219043 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-combined-ca-bundle\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.219066 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-credential-keys\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.219111 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-internal-tls-certs\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.227746 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-fernet-keys\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.228187 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-scripts\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.229605 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-credential-keys\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.229849 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-config-data\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.230219 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-internal-tls-certs\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.230506 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-public-tls-certs\") pod \"keystone-6544c97df6-6skzg\" (UID: 
\"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.234510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/764dea52-7d14-4da5-a50d-2fa41001e2b4-combined-ca-bundle\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.234858 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5kts\" (UniqueName: \"kubernetes.io/projected/764dea52-7d14-4da5-a50d-2fa41001e2b4-kube-api-access-t5kts\") pod \"keystone-6544c97df6-6skzg\" (UID: \"764dea52-7d14-4da5-a50d-2fa41001e2b4\") " pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.248961 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:36 crc kubenswrapper[4949]: I1001 15:59:36.709430 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-6544c97df6-6skzg"] Oct 01 15:59:36 crc kubenswrapper[4949]: W1001 15:59:36.722704 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod764dea52_7d14_4da5_a50d_2fa41001e2b4.slice/crio-71766017aae0f0feaa79645abffc1250dcc88352f4ff4f2e9c99b17f053d0b36 WatchSource:0}: Error finding container 71766017aae0f0feaa79645abffc1250dcc88352f4ff4f2e9c99b17f053d0b36: Status 404 returned error can't find the container with id 71766017aae0f0feaa79645abffc1250dcc88352f4ff4f2e9c99b17f053d0b36 Oct 01 15:59:37 crc kubenswrapper[4949]: I1001 15:59:37.114792 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6544c97df6-6skzg" 
event={"ID":"764dea52-7d14-4da5-a50d-2fa41001e2b4","Type":"ContainerStarted","Data":"972eb66e4283dd33e5588784a0c1be78d0caa7f6e2e653963dd3bd7efa896d5e"} Oct 01 15:59:37 crc kubenswrapper[4949]: I1001 15:59:37.115108 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-6544c97df6-6skzg" event={"ID":"764dea52-7d14-4da5-a50d-2fa41001e2b4","Type":"ContainerStarted","Data":"71766017aae0f0feaa79645abffc1250dcc88352f4ff4f2e9c99b17f053d0b36"} Oct 01 15:59:37 crc kubenswrapper[4949]: I1001 15:59:37.115162 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/keystone-6544c97df6-6skzg" Oct 01 15:59:37 crc kubenswrapper[4949]: I1001 15:59:37.146586 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-6544c97df6-6skzg" podStartSLOduration=2.1465649 podStartE2EDuration="2.1465649s" podCreationTimestamp="2025-10-01 15:59:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:37.13588304 +0000 UTC m=+1076.441489251" watchObservedRunningTime="2025-10-01 15:59:37.1465649 +0000 UTC m=+1076.452171091" Oct 01 15:59:39 crc kubenswrapper[4949]: I1001 15:59:39.134928 4949 generic.go:334] "Generic (PLEG): container finished" podID="81bf5b66-a190-4c1a-8607-863f93075c01" containerID="6f4d8443e7f3ab71dd2b9192b52536952c0886c7553f72b34ce0bf26abfa4d61" exitCode=0 Oct 01 15:59:39 crc kubenswrapper[4949]: I1001 15:59:39.135009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wfg74" event={"ID":"81bf5b66-a190-4c1a-8607-863f93075c01","Type":"ContainerDied","Data":"6f4d8443e7f3ab71dd2b9192b52536952c0886c7553f72b34ce0bf26abfa4d61"} Oct 01 15:59:40 crc kubenswrapper[4949]: I1001 15:59:40.144635 4949 generic.go:334] "Generic (PLEG): container finished" podID="d7e44454-db3f-453a-8bc9-d8f435685e32" 
containerID="bc890e9d93a4a22895864dbd37936bcc5dbae355582e1b48fc18bc3b26aa3243" exitCode=0 Oct 01 15:59:40 crc kubenswrapper[4949]: I1001 15:59:40.144791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bm8vc" event={"ID":"d7e44454-db3f-453a-8bc9-d8f435685e32","Type":"ContainerDied","Data":"bc890e9d93a4a22895864dbd37936bcc5dbae355582e1b48fc18bc3b26aa3243"} Oct 01 15:59:40 crc kubenswrapper[4949]: I1001 15:59:40.897609 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wfg74" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.008033 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-scripts\") pod \"81bf5b66-a190-4c1a-8607-863f93075c01\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.008111 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xgcks\" (UniqueName: \"kubernetes.io/projected/81bf5b66-a190-4c1a-8607-863f93075c01-kube-api-access-xgcks\") pod \"81bf5b66-a190-4c1a-8607-863f93075c01\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.008152 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-config-data\") pod \"81bf5b66-a190-4c1a-8607-863f93075c01\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.008241 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5b66-a190-4c1a-8607-863f93075c01-logs\") pod \"81bf5b66-a190-4c1a-8607-863f93075c01\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " Oct 01 15:59:41 crc 
kubenswrapper[4949]: I1001 15:59:41.008315 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-combined-ca-bundle\") pod \"81bf5b66-a190-4c1a-8607-863f93075c01\" (UID: \"81bf5b66-a190-4c1a-8607-863f93075c01\") " Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.008780 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/81bf5b66-a190-4c1a-8607-863f93075c01-logs" (OuterVolumeSpecName: "logs") pod "81bf5b66-a190-4c1a-8607-863f93075c01" (UID: "81bf5b66-a190-4c1a-8607-863f93075c01"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.013995 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81bf5b66-a190-4c1a-8607-863f93075c01-kube-api-access-xgcks" (OuterVolumeSpecName: "kube-api-access-xgcks") pod "81bf5b66-a190-4c1a-8607-863f93075c01" (UID: "81bf5b66-a190-4c1a-8607-863f93075c01"). InnerVolumeSpecName "kube-api-access-xgcks". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.021298 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-scripts" (OuterVolumeSpecName: "scripts") pod "81bf5b66-a190-4c1a-8607-863f93075c01" (UID: "81bf5b66-a190-4c1a-8607-863f93075c01"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.034940 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "81bf5b66-a190-4c1a-8607-863f93075c01" (UID: "81bf5b66-a190-4c1a-8607-863f93075c01"). 
InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.040769 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-config-data" (OuterVolumeSpecName: "config-data") pod "81bf5b66-a190-4c1a-8607-863f93075c01" (UID: "81bf5b66-a190-4c1a-8607-863f93075c01"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.109729 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.109758 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.109767 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xgcks\" (UniqueName: \"kubernetes.io/projected/81bf5b66-a190-4c1a-8607-863f93075c01-kube-api-access-xgcks\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.109778 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/81bf5b66-a190-4c1a-8607-863f93075c01-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.109786 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/81bf5b66-a190-4c1a-8607-863f93075c01-logs\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.154381 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-db-sync-wfg74" 
event={"ID":"81bf5b66-a190-4c1a-8607-863f93075c01","Type":"ContainerDied","Data":"bbb4c678d2dda41eaa4a267f84b24e656ec923e26469bd966a71f87c73530fa5"} Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.154437 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbb4c678d2dda41eaa4a267f84b24e656ec923e26469bd966a71f87c73530fa5" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.154440 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/placement-db-sync-wfg74" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.233526 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/placement-575569c7bd-g6srl"] Oct 01 15:59:41 crc kubenswrapper[4949]: E1001 15:59:41.234325 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="81bf5b66-a190-4c1a-8607-863f93075c01" containerName="placement-db-sync" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.235008 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="81bf5b66-a190-4c1a-8607-863f93075c01" containerName="placement-db-sync" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.235252 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="81bf5b66-a190-4c1a-8607-863f93075c01" containerName="placement-db-sync" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.236892 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.240073 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-scripts" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.240483 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-internal-svc" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.240761 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-placement-public-svc" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.242671 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-config-data" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.245738 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-js7fk" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.250509 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-575569c7bd-g6srl"] Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.415460 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-internal-tls-certs\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.415507 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-public-tls-certs\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.415534 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpkjn\" (UniqueName: \"kubernetes.io/projected/d6bde34f-c88d-4a70-ab54-084c4727d46d-kube-api-access-xpkjn\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.415719 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6bde34f-c88d-4a70-ab54-084c4727d46d-logs\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.415766 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-combined-ca-bundle\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.415818 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-config-data\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.416140 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-scripts\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.518232 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-internal-tls-certs\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.518280 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-public-tls-certs\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.518306 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpkjn\" (UniqueName: \"kubernetes.io/projected/d6bde34f-c88d-4a70-ab54-084c4727d46d-kube-api-access-xpkjn\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.518369 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6bde34f-c88d-4a70-ab54-084c4727d46d-logs\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.518420 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-combined-ca-bundle\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.518455 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-config-data\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.518536 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-scripts\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.519901 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d6bde34f-c88d-4a70-ab54-084c4727d46d-logs\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.524745 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-internal-tls-certs\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.527545 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-combined-ca-bundle\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.550994 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-public-tls-certs\") pod \"placement-575569c7bd-g6srl\" (UID: 
\"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.551241 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-config-data\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.560618 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d6bde34f-c88d-4a70-ab54-084c4727d46d-scripts\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.560995 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpkjn\" (UniqueName: \"kubernetes.io/projected/d6bde34f-c88d-4a70-ab54-084c4727d46d-kube-api-access-xpkjn\") pod \"placement-575569c7bd-g6srl\" (UID: \"d6bde34f-c88d-4a70-ab54-084c4727d46d\") " pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.568181 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"placement-placement-dockercfg-js7fk" Oct 01 15:59:41 crc kubenswrapper[4949]: I1001 15:59:41.576882 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.162305 4949 generic.go:334] "Generic (PLEG): container finished" podID="92c26ffd-a7f6-4593-8718-8947375730ef" containerID="d062929600b36141648398c096f2ff8c2539dee38d9804190b4ef4444bf280de" exitCode=0 Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.162345 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vs9qk" event={"ID":"92c26ffd-a7f6-4593-8718-8947375730ef","Type":"ContainerDied","Data":"d062929600b36141648398c096f2ff8c2539dee38d9804190b4ef4444bf280de"} Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.456989 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.643622 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ltfj\" (UniqueName: \"kubernetes.io/projected/d7e44454-db3f-453a-8bc9-d8f435685e32-kube-api-access-9ltfj\") pod \"d7e44454-db3f-453a-8bc9-d8f435685e32\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.643720 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-db-sync-config-data\") pod \"d7e44454-db3f-453a-8bc9-d8f435685e32\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.643748 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-combined-ca-bundle\") pod \"d7e44454-db3f-453a-8bc9-d8f435685e32\" (UID: \"d7e44454-db3f-453a-8bc9-d8f435685e32\") " Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.648975 4949 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d7e44454-db3f-453a-8bc9-d8f435685e32" (UID: "d7e44454-db3f-453a-8bc9-d8f435685e32"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.650262 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e44454-db3f-453a-8bc9-d8f435685e32-kube-api-access-9ltfj" (OuterVolumeSpecName: "kube-api-access-9ltfj") pod "d7e44454-db3f-453a-8bc9-d8f435685e32" (UID: "d7e44454-db3f-453a-8bc9-d8f435685e32"). InnerVolumeSpecName "kube-api-access-9ltfj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.671651 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d7e44454-db3f-453a-8bc9-d8f435685e32" (UID: "d7e44454-db3f-453a-8bc9-d8f435685e32"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.745314 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9ltfj\" (UniqueName: \"kubernetes.io/projected/d7e44454-db3f-453a-8bc9-d8f435685e32-kube-api-access-9ltfj\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.745342 4949 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.745353 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d7e44454-db3f-453a-8bc9-d8f435685e32-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:42 crc kubenswrapper[4949]: I1001 15:59:42.779046 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/placement-575569c7bd-g6srl"] Oct 01 15:59:42 crc kubenswrapper[4949]: W1001 15:59:42.780684 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd6bde34f_c88d_4a70_ab54_084c4727d46d.slice/crio-8b9a1d182830a7ab40cb47ba56276d8e8cfcaa435a00220a31436f3ad492a3a1 WatchSource:0}: Error finding container 8b9a1d182830a7ab40cb47ba56276d8e8cfcaa435a00220a31436f3ad492a3a1: Status 404 returned error can't find the container with id 8b9a1d182830a7ab40cb47ba56276d8e8cfcaa435a00220a31436f3ad492a3a1 Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.171050 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-db-sync-bm8vc" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.171037 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-db-sync-bm8vc" event={"ID":"d7e44454-db3f-453a-8bc9-d8f435685e32","Type":"ContainerDied","Data":"f8dff302d653772dcb5afad4658c13a6b347f25be97895f33116cf2e328f75e1"} Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.171497 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8dff302d653772dcb5afad4658c13a6b347f25be97895f33116cf2e328f75e1" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.172829 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-575569c7bd-g6srl" event={"ID":"d6bde34f-c88d-4a70-ab54-084c4727d46d","Type":"ContainerStarted","Data":"0a81848365df67bef5fc1f67b5b314e1c83a2dbcf95821bf9356fe64a9814f8a"} Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.172859 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-575569c7bd-g6srl" event={"ID":"d6bde34f-c88d-4a70-ab54-084c4727d46d","Type":"ContainerStarted","Data":"8b9a1d182830a7ab40cb47ba56276d8e8cfcaa435a00220a31436f3ad492a3a1"} Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.175141 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerStarted","Data":"9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac"} Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.175263 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="ceilometer-central-agent" containerID="cri-o://f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4" gracePeriod=30 Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.175316 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openstack/ceilometer-0" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.175366 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="ceilometer-notification-agent" containerID="cri-o://2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398" gracePeriod=30 Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.175387 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="sg-core" containerID="cri-o://f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684" gracePeriod=30 Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.175642 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="proxy-httpd" containerID="cri-o://9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac" gracePeriod=30 Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.188175 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tjfmw" event={"ID":"41cbbbe8-6b79-4667-ba8d-7252d0d1a998","Type":"ContainerStarted","Data":"07d91c7413a4e7927432e7b97110cdc226eef7f33770a710271e1d9f96cec01b"} Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.241971 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-db-sync-tjfmw" podStartSLOduration=2.255448689 podStartE2EDuration="46.241951044s" podCreationTimestamp="2025-10-01 15:58:57 +0000 UTC" firstStartedPulling="2025-10-01 15:58:58.423157112 +0000 UTC m=+1037.728763303" lastFinishedPulling="2025-10-01 15:59:42.409659467 +0000 UTC m=+1081.715265658" observedRunningTime="2025-10-01 15:59:43.241626435 +0000 UTC m=+1082.547232646" watchObservedRunningTime="2025-10-01 15:59:43.241951044 +0000 UTC 
m=+1082.547557245" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.246247 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.262935411 podStartE2EDuration="46.246226699s" podCreationTimestamp="2025-10-01 15:58:57 +0000 UTC" firstStartedPulling="2025-10-01 15:58:58.423382848 +0000 UTC m=+1037.728989039" lastFinishedPulling="2025-10-01 15:59:42.406674136 +0000 UTC m=+1081.712280327" observedRunningTime="2025-10-01 15:59:43.22188504 +0000 UTC m=+1082.527491231" watchObservedRunningTime="2025-10-01 15:59:43.246226699 +0000 UTC m=+1082.551832890" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.431667 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.562403 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-config\") pod \"92c26ffd-a7f6-4593-8718-8947375730ef\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.562483 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-combined-ca-bundle\") pod \"92c26ffd-a7f6-4593-8718-8947375730ef\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.563097 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46tnl\" (UniqueName: \"kubernetes.io/projected/92c26ffd-a7f6-4593-8718-8947375730ef-kube-api-access-46tnl\") pod \"92c26ffd-a7f6-4593-8718-8947375730ef\" (UID: \"92c26ffd-a7f6-4593-8718-8947375730ef\") " Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.567403 4949 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/projected/92c26ffd-a7f6-4593-8718-8947375730ef-kube-api-access-46tnl" (OuterVolumeSpecName: "kube-api-access-46tnl") pod "92c26ffd-a7f6-4593-8718-8947375730ef" (UID: "92c26ffd-a7f6-4593-8718-8947375730ef"). InnerVolumeSpecName "kube-api-access-46tnl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.584943 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "92c26ffd-a7f6-4593-8718-8947375730ef" (UID: "92c26ffd-a7f6-4593-8718-8947375730ef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.585309 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-config" (OuterVolumeSpecName: "config") pod "92c26ffd-a7f6-4593-8718-8947375730ef" (UID: "92c26ffd-a7f6-4593-8718-8947375730ef"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.665252 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.665285 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/92c26ffd-a7f6-4593-8718-8947375730ef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.665296 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46tnl\" (UniqueName: \"kubernetes.io/projected/92c26ffd-a7f6-4593-8718-8947375730ef-kube-api-access-46tnl\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.680604 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-worker-5cdfb75847-bw4vd"] Oct 01 15:59:43 crc kubenswrapper[4949]: E1001 15:59:43.680948 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92c26ffd-a7f6-4593-8718-8947375730ef" containerName="neutron-db-sync" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.680965 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="92c26ffd-a7f6-4593-8718-8947375730ef" containerName="neutron-db-sync" Oct 01 15:59:43 crc kubenswrapper[4949]: E1001 15:59:43.680980 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d7e44454-db3f-453a-8bc9-d8f435685e32" containerName="barbican-db-sync" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.680986 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d7e44454-db3f-453a-8bc9-d8f435685e32" containerName="barbican-db-sync" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.681199 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d7e44454-db3f-453a-8bc9-d8f435685e32" 
containerName="barbican-db-sync" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.681226 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="92c26ffd-a7f6-4593-8718-8947375730ef" containerName="neutron-db-sync" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.689743 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.694567 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-config-data" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.694833 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-barbican-dockercfg-ttrc6" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.697055 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-worker-config-data" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.720190 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cdfb75847-bw4vd"] Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.744520 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-keystone-listener-54c68d78fd-k7v8v"] Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.746298 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.748611 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-keystone-listener-config-data" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.767294 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54c68d78fd-k7v8v"] Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.802455 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-699df9757c-nfrcg"] Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.804187 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.814093 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-nfrcg"] Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.868402 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-config-data-custom\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.868451 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36bab499-5905-4a12-baf4-dbbcd1422864-logs\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.869270 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-combined-ca-bundle\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.869327 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bab499-5905-4a12-baf4-dbbcd1422864-combined-ca-bundle\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.869370 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36bab499-5905-4a12-baf4-dbbcd1422864-config-data-custom\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.869434 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-config-data\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.869489 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46mg5\" (UniqueName: \"kubernetes.io/projected/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-kube-api-access-46mg5\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc 
kubenswrapper[4949]: I1001 15:59:43.869517 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36bab499-5905-4a12-baf4-dbbcd1422864-config-data\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.869681 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-logs\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.870369 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6vn8\" (UniqueName: \"kubernetes.io/projected/36bab499-5905-4a12-baf4-dbbcd1422864-kube-api-access-w6vn8\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.936999 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-db9d4676b-6pnsc"] Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.938483 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.941053 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"barbican-api-config-data" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.949614 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-db9d4676b-6pnsc"] Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.971724 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103475e5-ed89-4051-9173-43fd280a60c4-logs\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.971787 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46mg5\" (UniqueName: \"kubernetes.io/projected/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-kube-api-access-46mg5\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.971873 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36bab499-5905-4a12-baf4-dbbcd1422864-config-data\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.971901 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-logs\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " 
pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.971928 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6vn8\" (UniqueName: \"kubernetes.io/projected/36bab499-5905-4a12-baf4-dbbcd1422864-kube-api-access-w6vn8\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.971952 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data-custom\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.971969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-config-data-custom\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.971991 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36bab499-5905-4a12-baf4-dbbcd1422864-logs\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972018 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng84p\" (UniqueName: \"kubernetes.io/projected/615f797d-3143-4c5c-8941-91855be00750-kube-api-access-ng84p\") pod 
\"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972035 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-config\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972057 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-combined-ca-bundle\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972085 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/36bab499-5905-4a12-baf4-dbbcd1422864-combined-ca-bundle\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972108 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972156 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/36bab499-5905-4a12-baf4-dbbcd1422864-config-data-custom\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972266 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrhml\" (UniqueName: \"kubernetes.io/projected/103475e5-ed89-4051-9173-43fd280a60c4-kube-api-access-jrhml\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972309 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-dns-svc\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972331 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-combined-ca-bundle\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972354 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972376 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-config-data\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-logs\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.972403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.973679 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/36bab499-5905-4a12-baf4-dbbcd1422864-logs\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.976867 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/36bab499-5905-4a12-baf4-dbbcd1422864-config-data-custom\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.978318 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/36bab499-5905-4a12-baf4-dbbcd1422864-combined-ca-bundle\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.978784 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-config-data-custom\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.979165 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-combined-ca-bundle\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.979894 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/36bab499-5905-4a12-baf4-dbbcd1422864-config-data\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.980949 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-config-data\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.996842 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46mg5\" 
(UniqueName: \"kubernetes.io/projected/f5c8bcf5-419a-4094-ac1c-bed8d1610faf-kube-api-access-46mg5\") pod \"barbican-keystone-listener-54c68d78fd-k7v8v\" (UID: \"f5c8bcf5-419a-4094-ac1c-bed8d1610faf\") " pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:43 crc kubenswrapper[4949]: I1001 15:59:43.997461 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6vn8\" (UniqueName: \"kubernetes.io/projected/36bab499-5905-4a12-baf4-dbbcd1422864-kube-api-access-w6vn8\") pod \"barbican-worker-5cdfb75847-bw4vd\" (UID: \"36bab499-5905-4a12-baf4-dbbcd1422864\") " pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.029155 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-worker-5cdfb75847-bw4vd" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073535 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng84p\" (UniqueName: \"kubernetes.io/projected/615f797d-3143-4c5c-8941-91855be00750-kube-api-access-ng84p\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073611 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-config\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073655 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " 
pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073705 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrhml\" (UniqueName: \"kubernetes.io/projected/103475e5-ed89-4051-9173-43fd280a60c4-kube-api-access-jrhml\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073730 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-dns-svc\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073751 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-combined-ca-bundle\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073769 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073789 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 
15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073826 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103475e5-ed89-4051-9173-43fd280a60c4-logs\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.073863 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data-custom\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.074737 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-dns-svc\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.076157 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-config\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.076480 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-nb\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.076759 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103475e5-ed89-4051-9173-43fd280a60c4-logs\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.076768 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-sb\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.080449 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data-custom\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.082637 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.082861 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-combined-ca-bundle\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.083920 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.092763 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng84p\" (UniqueName: \"kubernetes.io/projected/615f797d-3143-4c5c-8941-91855be00750-kube-api-access-ng84p\") pod \"dnsmasq-dns-699df9757c-nfrcg\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.097235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrhml\" (UniqueName: \"kubernetes.io/projected/103475e5-ed89-4051-9173-43fd280a60c4-kube-api-access-jrhml\") pod \"barbican-api-db9d4676b-6pnsc\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") " pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.174449 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.227249 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/placement-575569c7bd-g6srl" event={"ID":"d6bde34f-c88d-4a70-ab54-084c4727d46d","Type":"ContainerStarted","Data":"47c04d1388b2fc85fe32d1a550104eeeedc3a72af5e1a1af924d81db9a76ee1a"} Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.227560 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.227576 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/placement-575569c7bd-g6srl" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.228516 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-db-sync-vs9qk" 
event={"ID":"92c26ffd-a7f6-4593-8718-8947375730ef","Type":"ContainerDied","Data":"ac9ba166b8a006538847042cde7ff5157a60178a25248d1a515303e4770a94f5"} Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.228539 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac9ba166b8a006538847042cde7ff5157a60178a25248d1a515303e4770a94f5" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.228606 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-db-sync-vs9qk" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.233446 4949 generic.go:334] "Generic (PLEG): container finished" podID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerID="9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac" exitCode=0 Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.233467 4949 generic.go:334] "Generic (PLEG): container finished" podID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerID="f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684" exitCode=2 Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.233474 4949 generic.go:334] "Generic (PLEG): container finished" podID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerID="f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4" exitCode=0 Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.233496 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerDied","Data":"9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac"} Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.233519 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerDied","Data":"f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684"} Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.233529 4949 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerDied","Data":"f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4"} Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.256859 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/placement-575569c7bd-g6srl" podStartSLOduration=3.256843673 podStartE2EDuration="3.256843673s" podCreationTimestamp="2025-10-01 15:59:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:44.249894945 +0000 UTC m=+1083.555501136" watchObservedRunningTime="2025-10-01 15:59:44.256843673 +0000 UTC m=+1083.562449864" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.260735 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.386737 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-keystone-listener-54c68d78fd-k7v8v"] Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.450025 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-nfrcg"] Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.471393 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-ljcwm"] Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.473091 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.486644 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-ljcwm"] Oct 01 15:59:44 crc kubenswrapper[4949]: W1001 15:59:44.517388 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod36bab499_5905_4a12_baf4_dbbcd1422864.slice/crio-f1bae958ede671f345acf20de6ce070e19e77ffb46926ced544cfdf146af9fe7 WatchSource:0}: Error finding container f1bae958ede671f345acf20de6ce070e19e77ffb46926ced544cfdf146af9fe7: Status 404 returned error can't find the container with id f1bae958ede671f345acf20de6ce070e19e77ffb46926ced544cfdf146af9fe7 Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.520690 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-worker-5cdfb75847-bw4vd"] Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.582288 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-5b4b76d758-g7k8p"] Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.584358 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.593382 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-neutron-dockercfg-qhg79" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.593439 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-httpd-config" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.593551 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-config" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.593595 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-ovndbs" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.597312 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-crqvq\" (UniqueName: \"kubernetes.io/projected/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-kube-api-access-crqvq\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.597419 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.597504 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc 
kubenswrapper[4949]: I1001 15:59:44.597591 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-config\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.598198 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-dns-svc\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.601770 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b4b76d758-g7k8p"] Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.699979 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-httpd-config\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.700104 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.700170 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-config\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: 
\"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.700211 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-config\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.700238 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-dns-svc\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.700259 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-combined-ca-bundle\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.700337 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-crqvq\" (UniqueName: \"kubernetes.io/projected/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-kube-api-access-crqvq\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.700419 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " 
pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.700457 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-575l4\" (UniqueName: \"kubernetes.io/projected/83844cc6-633a-48b1-aa76-e1cf34582971-kube-api-access-575l4\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.700482 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-ovndb-tls-certs\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.703458 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-sb\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.703468 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-config\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.703703 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-nb\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 
15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.703798 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-dns-svc\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.720389 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-crqvq\" (UniqueName: \"kubernetes.io/projected/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-kube-api-access-crqvq\") pod \"dnsmasq-dns-6bb684768f-ljcwm\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.754528 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-nfrcg"] Oct 01 15:59:44 crc kubenswrapper[4949]: W1001 15:59:44.758153 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod615f797d_3143_4c5c_8941_91855be00750.slice/crio-c617912c11e9b2dee8c8d188382c47e354e5133c7881ff65d644316cdf9ba192 WatchSource:0}: Error finding container c617912c11e9b2dee8c8d188382c47e354e5133c7881ff65d644316cdf9ba192: Status 404 returned error can't find the container with id c617912c11e9b2dee8c8d188382c47e354e5133c7881ff65d644316cdf9ba192 Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.811864 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-config\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.811925 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" 
(UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-combined-ca-bundle\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.811996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-575l4\" (UniqueName: \"kubernetes.io/projected/83844cc6-633a-48b1-aa76-e1cf34582971-kube-api-access-575l4\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.812026 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-ovndb-tls-certs\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.812054 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-httpd-config\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.816875 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-ovndb-tls-certs\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.818115 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-combined-ca-bundle\") pod 
\"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.818785 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.818858 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-config\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.828411 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-httpd-config\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.835802 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-575l4\" (UniqueName: \"kubernetes.io/projected/83844cc6-633a-48b1-aa76-e1cf34582971-kube-api-access-575l4\") pod \"neutron-5b4b76d758-g7k8p\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.914278 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:44 crc kubenswrapper[4949]: I1001 15:59:44.980741 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-db9d4676b-6pnsc"] Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.255061 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" event={"ID":"f5c8bcf5-419a-4094-ac1c-bed8d1610faf","Type":"ContainerStarted","Data":"a144d2d373394ca1ebd852a019ddd832153f21f55e12e5650b579b4a1187607a"} Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.265172 4949 generic.go:334] "Generic (PLEG): container finished" podID="615f797d-3143-4c5c-8941-91855be00750" containerID="ac075b9edda0e61c4450c00d7f237d314a5cc76ff7f18277fff1066581bca222" exitCode=0 Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.265280 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-nfrcg" event={"ID":"615f797d-3143-4c5c-8941-91855be00750","Type":"ContainerDied","Data":"ac075b9edda0e61c4450c00d7f237d314a5cc76ff7f18277fff1066581bca222"} Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.265307 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-nfrcg" event={"ID":"615f797d-3143-4c5c-8941-91855be00750","Type":"ContainerStarted","Data":"c617912c11e9b2dee8c8d188382c47e354e5133c7881ff65d644316cdf9ba192"} Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.277198 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db9d4676b-6pnsc" event={"ID":"103475e5-ed89-4051-9173-43fd280a60c4","Type":"ContainerStarted","Data":"79cb92a86cbc41c4a571bdd0b64471e4c1e141e4e8fd3679409baade464188b5"} Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.279689 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cdfb75847-bw4vd" 
event={"ID":"36bab499-5905-4a12-baf4-dbbcd1422864","Type":"ContainerStarted","Data":"f1bae958ede671f345acf20de6ce070e19e77ffb46926ced544cfdf146af9fe7"} Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.454728 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-5b4b76d758-g7k8p"] Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.478237 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-ljcwm"] Oct 01 15:59:45 crc kubenswrapper[4949]: W1001 15:59:45.488197 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod83844cc6_633a_48b1_aa76_e1cf34582971.slice/crio-4164bd3f1f93a37e7f003254156224cf9b668b89782c5ca6b0c1bddf366c033e WatchSource:0}: Error finding container 4164bd3f1f93a37e7f003254156224cf9b668b89782c5ca6b0c1bddf366c033e: Status 404 returned error can't find the container with id 4164bd3f1f93a37e7f003254156224cf9b668b89782c5ca6b0c1bddf366c033e Oct 01 15:59:45 crc kubenswrapper[4949]: W1001 15:59:45.493653 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod09cb5d0c_f02b_4067_94ea_43e1067dbb5b.slice/crio-d2ac3ddc542f9b193a5de075ed680c78f33c0431d3447aa0a56018db9710c2e5 WatchSource:0}: Error finding container d2ac3ddc542f9b193a5de075ed680c78f33c0431d3447aa0a56018db9710c2e5: Status 404 returned error can't find the container with id d2ac3ddc542f9b193a5de075ed680c78f33c0431d3447aa0a56018db9710c2e5 Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.528793 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.626772 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-sb\") pod \"615f797d-3143-4c5c-8941-91855be00750\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.626898 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-dns-svc\") pod \"615f797d-3143-4c5c-8941-91855be00750\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.626954 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng84p\" (UniqueName: \"kubernetes.io/projected/615f797d-3143-4c5c-8941-91855be00750-kube-api-access-ng84p\") pod \"615f797d-3143-4c5c-8941-91855be00750\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.627057 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-nb\") pod \"615f797d-3143-4c5c-8941-91855be00750\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.627075 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-config\") pod \"615f797d-3143-4c5c-8941-91855be00750\" (UID: \"615f797d-3143-4c5c-8941-91855be00750\") " Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.632420 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/615f797d-3143-4c5c-8941-91855be00750-kube-api-access-ng84p" (OuterVolumeSpecName: "kube-api-access-ng84p") pod "615f797d-3143-4c5c-8941-91855be00750" (UID: "615f797d-3143-4c5c-8941-91855be00750"). InnerVolumeSpecName "kube-api-access-ng84p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.674512 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "615f797d-3143-4c5c-8941-91855be00750" (UID: "615f797d-3143-4c5c-8941-91855be00750"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.675658 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "615f797d-3143-4c5c-8941-91855be00750" (UID: "615f797d-3143-4c5c-8941-91855be00750"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.675762 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-config" (OuterVolumeSpecName: "config") pod "615f797d-3143-4c5c-8941-91855be00750" (UID: "615f797d-3143-4c5c-8941-91855be00750"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.680552 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "615f797d-3143-4c5c-8941-91855be00750" (UID: "615f797d-3143-4c5c-8941-91855be00750"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.730379 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.730625 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng84p\" (UniqueName: \"kubernetes.io/projected/615f797d-3143-4c5c-8941-91855be00750-kube-api-access-ng84p\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.730640 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.730651 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:45 crc kubenswrapper[4949]: I1001 15:59:45.730666 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/615f797d-3143-4c5c-8941-91855be00750-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.292041 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-699df9757c-nfrcg" event={"ID":"615f797d-3143-4c5c-8941-91855be00750","Type":"ContainerDied","Data":"c617912c11e9b2dee8c8d188382c47e354e5133c7881ff65d644316cdf9ba192"} Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.292395 4949 scope.go:117] "RemoveContainer" containerID="ac075b9edda0e61c4450c00d7f237d314a5cc76ff7f18277fff1066581bca222" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.292532 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-699df9757c-nfrcg" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.298304 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db9d4676b-6pnsc" event={"ID":"103475e5-ed89-4051-9173-43fd280a60c4","Type":"ContainerStarted","Data":"24964535941ac1dc01ab9afae1eb89f6f1ef6eaa1955e9582ed1bea2533325ab"} Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.298346 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db9d4676b-6pnsc" event={"ID":"103475e5-ed89-4051-9173-43fd280a60c4","Type":"ContainerStarted","Data":"53955d744a27802dcd05146fdedb9a20333246013bce43929e4546714ab369d9"} Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.298455 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.298513 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.299680 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" event={"ID":"09cb5d0c-f02b-4067-94ea-43e1067dbb5b","Type":"ContainerStarted","Data":"a19bbc6f564135b69fde67329f49126a1a395eaedb240a5731aa02cb7935503d"} Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.299719 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" event={"ID":"09cb5d0c-f02b-4067-94ea-43e1067dbb5b","Type":"ContainerStarted","Data":"d2ac3ddc542f9b193a5de075ed680c78f33c0431d3447aa0a56018db9710c2e5"} Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.302000 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b4b76d758-g7k8p" event={"ID":"83844cc6-633a-48b1-aa76-e1cf34582971","Type":"ContainerStarted","Data":"4296e7f70193aaff1a495f543a6785dd939d89c451bf8a090ec069a2e6ac997c"} Oct 
01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.302030 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b4b76d758-g7k8p" event={"ID":"83844cc6-633a-48b1-aa76-e1cf34582971","Type":"ContainerStarted","Data":"4164bd3f1f93a37e7f003254156224cf9b668b89782c5ca6b0c1bddf366c033e"} Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.321676 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-db9d4676b-6pnsc" podStartSLOduration=3.321651703 podStartE2EDuration="3.321651703s" podCreationTimestamp="2025-10-01 15:59:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:46.314209761 +0000 UTC m=+1085.619815962" watchObservedRunningTime="2025-10-01 15:59:46.321651703 +0000 UTC m=+1085.627257904" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.373503 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-nfrcg"] Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.380578 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-699df9757c-nfrcg"] Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.792482 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.953370 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-sg-core-conf-yaml\") pod \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.953788 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-config-data\") pod \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.953833 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-scripts\") pod \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.953916 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-run-httpd\") pod \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.953933 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-combined-ca-bundle\") pod \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.953987 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: 
\"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-log-httpd\") pod \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.954018 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7zjx2\" (UniqueName: \"kubernetes.io/projected/2ebb36cc-f642-402b-8fc7-c12e7f35776a-kube-api-access-7zjx2\") pod \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\" (UID: \"2ebb36cc-f642-402b-8fc7-c12e7f35776a\") " Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.954418 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "2ebb36cc-f642-402b-8fc7-c12e7f35776a" (UID: "2ebb36cc-f642-402b-8fc7-c12e7f35776a"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.955035 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "2ebb36cc-f642-402b-8fc7-c12e7f35776a" (UID: "2ebb36cc-f642-402b-8fc7-c12e7f35776a"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.960990 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-scripts" (OuterVolumeSpecName: "scripts") pod "2ebb36cc-f642-402b-8fc7-c12e7f35776a" (UID: "2ebb36cc-f642-402b-8fc7-c12e7f35776a"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:46 crc kubenswrapper[4949]: I1001 15:59:46.973733 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ebb36cc-f642-402b-8fc7-c12e7f35776a-kube-api-access-7zjx2" (OuterVolumeSpecName: "kube-api-access-7zjx2") pod "2ebb36cc-f642-402b-8fc7-c12e7f35776a" (UID: "2ebb36cc-f642-402b-8fc7-c12e7f35776a"). InnerVolumeSpecName "kube-api-access-7zjx2". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.047996 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "2ebb36cc-f642-402b-8fc7-c12e7f35776a" (UID: "2ebb36cc-f642-402b-8fc7-c12e7f35776a"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.055431 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.055464 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/2ebb36cc-f642-402b-8fc7-c12e7f35776a-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.055473 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7zjx2\" (UniqueName: \"kubernetes.io/projected/2ebb36cc-f642-402b-8fc7-c12e7f35776a-kube-api-access-7zjx2\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.055484 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-sg-core-conf-yaml\") on 
node \"crc\" DevicePath \"\"" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.055492 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.113142 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "2ebb36cc-f642-402b-8fc7-c12e7f35776a" (UID: "2ebb36cc-f642-402b-8fc7-c12e7f35776a"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.149692 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-config-data" (OuterVolumeSpecName: "config-data") pod "2ebb36cc-f642-402b-8fc7-c12e7f35776a" (UID: "2ebb36cc-f642-402b-8fc7-c12e7f35776a"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.156824 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.156851 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ebb36cc-f642-402b-8fc7-c12e7f35776a-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.320869 4949 generic.go:334] "Generic (PLEG): container finished" podID="09cb5d0c-f02b-4067-94ea-43e1067dbb5b" containerID="a19bbc6f564135b69fde67329f49126a1a395eaedb240a5731aa02cb7935503d" exitCode=0 Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.321062 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" event={"ID":"09cb5d0c-f02b-4067-94ea-43e1067dbb5b","Type":"ContainerDied","Data":"a19bbc6f564135b69fde67329f49126a1a395eaedb240a5731aa02cb7935503d"} Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.325164 4949 generic.go:334] "Generic (PLEG): container finished" podID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerID="2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398" exitCode=0 Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.325327 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.327150 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerDied","Data":"2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398"} Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.327222 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"2ebb36cc-f642-402b-8fc7-c12e7f35776a","Type":"ContainerDied","Data":"1c28869d35fff2e54a32cffb4e1ddc78b5461d5051aed6fc82b2458f23666fb8"} Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.327257 4949 scope.go:117] "RemoveContainer" containerID="9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.332789 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cdfb75847-bw4vd" event={"ID":"36bab499-5905-4a12-baf4-dbbcd1422864","Type":"ContainerStarted","Data":"c28af44fc955f603deb3a3aca555276d77a76414b869ddf3d0c3013cb1994813"} Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.332831 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-worker-5cdfb75847-bw4vd" event={"ID":"36bab499-5905-4a12-baf4-dbbcd1422864","Type":"ContainerStarted","Data":"ce5aed345b76b4c9e87dfa7d0f6c6bd956084c999e3fbab1a9e4a5a57fe0abad"} Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.360488 4949 scope.go:117] "RemoveContainer" containerID="f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.363692 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b4b76d758-g7k8p" event={"ID":"83844cc6-633a-48b1-aa76-e1cf34582971","Type":"ContainerStarted","Data":"2054f5ff6ae03b875b6cbc963c060c686e4c972fa288f3d9ac7c5bfcdf50f28a"} Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 
15:59:47.363745 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.394478 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" event={"ID":"f5c8bcf5-419a-4094-ac1c-bed8d1610faf","Type":"ContainerStarted","Data":"5bae70c6fa58be1f2c5db7c53117be48e3e1751f80f2cf8f36ca799294e71d1c"} Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.394509 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" event={"ID":"f5c8bcf5-419a-4094-ac1c-bed8d1610faf","Type":"ContainerStarted","Data":"4e3c69767e98e7b9c622d548f61e38abe52edb206c6723b0511e69e2a42f9e45"} Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.396135 4949 scope.go:117] "RemoveContainer" containerID="2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424061 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-559fd97bd5-6zst2"] Oct 01 15:59:47 crc kubenswrapper[4949]: E1001 15:59:47.424466 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="ceilometer-central-agent" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424484 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="ceilometer-central-agent" Oct 01 15:59:47 crc kubenswrapper[4949]: E1001 15:59:47.424498 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="sg-core" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424504 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="sg-core" Oct 01 15:59:47 crc kubenswrapper[4949]: E1001 15:59:47.424518 4949 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="proxy-httpd" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424526 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="proxy-httpd" Oct 01 15:59:47 crc kubenswrapper[4949]: E1001 15:59:47.424538 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="615f797d-3143-4c5c-8941-91855be00750" containerName="init" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424544 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="615f797d-3143-4c5c-8941-91855be00750" containerName="init" Oct 01 15:59:47 crc kubenswrapper[4949]: E1001 15:59:47.424572 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="ceilometer-notification-agent" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424577 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="ceilometer-notification-agent" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424730 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="proxy-httpd" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424755 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="sg-core" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424773 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="ceilometer-central-agent" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.424783 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" containerName="ceilometer-notification-agent" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 
15:59:47.424794 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="615f797d-3143-4c5c-8941-91855be00750" containerName="init" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.428663 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.434270 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-public-svc" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.434457 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-neutron-internal-svc" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.434573 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-559fd97bd5-6zst2"] Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.434606 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-worker-5cdfb75847-bw4vd" podStartSLOduration=2.397944826 podStartE2EDuration="4.434589902s" podCreationTimestamp="2025-10-01 15:59:43 +0000 UTC" firstStartedPulling="2025-10-01 15:59:44.523160197 +0000 UTC m=+1083.828766388" lastFinishedPulling="2025-10-01 15:59:46.559805273 +0000 UTC m=+1085.865411464" observedRunningTime="2025-10-01 15:59:47.40317166 +0000 UTC m=+1086.708777861" watchObservedRunningTime="2025-10-01 15:59:47.434589902 +0000 UTC m=+1086.740196093" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.438802 4949 scope.go:117] "RemoveContainer" containerID="f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.446633 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-5b4b76d758-g7k8p" podStartSLOduration=3.446610499 podStartE2EDuration="3.446610499s" podCreationTimestamp="2025-10-01 15:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:47.427526151 +0000 UTC m=+1086.733132342" watchObservedRunningTime="2025-10-01 15:59:47.446610499 +0000 UTC m=+1086.752216690" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.463247 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-keystone-listener-54c68d78fd-k7v8v" podStartSLOduration=2.301182913 podStartE2EDuration="4.46322594s" podCreationTimestamp="2025-10-01 15:59:43 +0000 UTC" firstStartedPulling="2025-10-01 15:59:44.397729835 +0000 UTC m=+1083.703336026" lastFinishedPulling="2025-10-01 15:59:46.559772862 +0000 UTC m=+1085.865379053" observedRunningTime="2025-10-01 15:59:47.450626178 +0000 UTC m=+1086.756232369" watchObservedRunningTime="2025-10-01 15:59:47.46322594 +0000 UTC m=+1086.768832131" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.486232 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.487737 4949 scope.go:117] "RemoveContainer" containerID="9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac" Oct 01 15:59:47 crc kubenswrapper[4949]: E1001 15:59:47.488229 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac\": container with ID starting with 9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac not found: ID does not exist" containerID="9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.488268 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac"} err="failed to get container status \"9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac\": rpc error: 
code = NotFound desc = could not find container \"9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac\": container with ID starting with 9f12643cbf4f42c8470487f967890ea15f27599c99444d8ca27873a14123d5ac not found: ID does not exist" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.488295 4949 scope.go:117] "RemoveContainer" containerID="f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684" Oct 01 15:59:47 crc kubenswrapper[4949]: E1001 15:59:47.489881 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684\": container with ID starting with f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684 not found: ID does not exist" containerID="f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.489911 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684"} err="failed to get container status \"f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684\": rpc error: code = NotFound desc = could not find container \"f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684\": container with ID starting with f1c64af69df24b1e06bc4cb42846a23fd36c1ff811a774128a9664cb7f4b1684 not found: ID does not exist" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.489927 4949 scope.go:117] "RemoveContainer" containerID="2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398" Oct 01 15:59:47 crc kubenswrapper[4949]: E1001 15:59:47.493808 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398\": container with ID starting with 
2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398 not found: ID does not exist" containerID="2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.493846 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398"} err="failed to get container status \"2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398\": rpc error: code = NotFound desc = could not find container \"2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398\": container with ID starting with 2ce6b588121f00042fc55103091e4d4375b48192516e333728aaa5f3e119f398 not found: ID does not exist" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.493868 4949 scope.go:117] "RemoveContainer" containerID="f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4" Oct 01 15:59:47 crc kubenswrapper[4949]: E1001 15:59:47.496590 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4\": container with ID starting with f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4 not found: ID does not exist" containerID="f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.496615 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4"} err="failed to get container status \"f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4\": rpc error: code = NotFound desc = could not find container \"f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4\": container with ID starting with f6b5e005e3c08ea4b0398ff5be1f8ab3eed334cf9122bc2e31b162fc42f195c4 not found: ID does not 
exist" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.505185 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.516693 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.519551 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.525973 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.526289 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.532240 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.568083 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qtvj\" (UniqueName: \"kubernetes.io/projected/8e733c32-7a78-4088-b089-cfe1a37bb3e4-kube-api-access-6qtvj\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.568178 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-ovndb-tls-certs\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.568228 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-public-tls-certs\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.568294 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-internal-tls-certs\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.568328 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-httpd-config\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.568391 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-combined-ca-bundle\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.568528 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-config\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.614694 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ebb36cc-f642-402b-8fc7-c12e7f35776a" 
path="/var/lib/kubelet/pods/2ebb36cc-f642-402b-8fc7-c12e7f35776a/volumes" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.615648 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="615f797d-3143-4c5c-8941-91855be00750" path="/var/lib/kubelet/pods/615f797d-3143-4c5c-8941-91855be00750/volumes" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670235 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-run-httpd\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670319 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-ovndb-tls-certs\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670355 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-log-httpd\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670390 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-public-tls-certs\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670416 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-config-data\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670465 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-internal-tls-certs\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670495 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670556 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670608 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-httpd-config\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670657 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-combined-ca-bundle\") pod 
\"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670708 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zzvd\" (UniqueName: \"kubernetes.io/projected/a5778394-f962-4a15-960f-85a81428d1d0-kube-api-access-9zzvd\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670763 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-scripts\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670783 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-config\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.670829 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qtvj\" (UniqueName: \"kubernetes.io/projected/8e733c32-7a78-4088-b089-cfe1a37bb3e4-kube-api-access-6qtvj\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.675809 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-internal-tls-certs\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " 
pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.675877 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-httpd-config\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.675923 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-ovndb-tls-certs\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.676361 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-public-tls-certs\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.676973 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-config\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.677033 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8e733c32-7a78-4088-b089-cfe1a37bb3e4-combined-ca-bundle\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.693840 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-6qtvj\" (UniqueName: \"kubernetes.io/projected/8e733c32-7a78-4088-b089-cfe1a37bb3e4-kube-api-access-6qtvj\") pod \"neutron-559fd97bd5-6zst2\" (UID: \"8e733c32-7a78-4088-b089-cfe1a37bb3e4\") " pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.756250 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.772276 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zzvd\" (UniqueName: \"kubernetes.io/projected/a5778394-f962-4a15-960f-85a81428d1d0-kube-api-access-9zzvd\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.772358 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-scripts\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.772410 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-run-httpd\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.772535 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-log-httpd\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.772582 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-config-data\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.772630 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.772652 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.772928 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-run-httpd\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.772964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-log-httpd\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.775616 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-scripts\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 
15:59:47.776041 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.783860 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.789928 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-config-data\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.791099 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zzvd\" (UniqueName: \"kubernetes.io/projected/a5778394-f962-4a15-960f-85a81428d1d0-kube-api-access-9zzvd\") pod \"ceilometer-0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " pod="openstack/ceilometer-0" Oct 01 15:59:47 crc kubenswrapper[4949]: I1001 15:59:47.846742 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 15:59:48 crc kubenswrapper[4949]: I1001 15:59:48.038673 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 15:59:48 crc kubenswrapper[4949]: I1001 15:59:48.039036 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 15:59:48 crc kubenswrapper[4949]: I1001 15:59:48.347474 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-559fd97bd5-6zst2"] Oct 01 15:59:48 crc kubenswrapper[4949]: I1001 15:59:48.387846 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 15:59:48 crc kubenswrapper[4949]: I1001 15:59:48.416549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-559fd97bd5-6zst2" event={"ID":"8e733c32-7a78-4088-b089-cfe1a37bb3e4","Type":"ContainerStarted","Data":"0067732b34f2c73e0af35346b5bae69c7964018191794ab665634a0d889fe310"} Oct 01 15:59:48 crc kubenswrapper[4949]: I1001 15:59:48.421401 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" event={"ID":"09cb5d0c-f02b-4067-94ea-43e1067dbb5b","Type":"ContainerStarted","Data":"824262f2dab2df3cfcf23f74355950a02079fc759d8c7f8ae279183f295f80b0"} Oct 01 15:59:48 crc kubenswrapper[4949]: I1001 15:59:48.421491 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:48 crc kubenswrapper[4949]: I1001 15:59:48.422938 4949 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerStarted","Data":"df8d1c45c7a1f28112b7a2ba4ddcdaccfb88f324feeb362fc47a187baf2cde8b"} Oct 01 15:59:48 crc kubenswrapper[4949]: I1001 15:59:48.445299 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" podStartSLOduration=4.445278419 podStartE2EDuration="4.445278419s" podCreationTimestamp="2025-10-01 15:59:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:48.440498099 +0000 UTC m=+1087.746104320" watchObservedRunningTime="2025-10-01 15:59:48.445278419 +0000 UTC m=+1087.750884620" Oct 01 15:59:49 crc kubenswrapper[4949]: I1001 15:59:49.432655 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerStarted","Data":"4b0f4da982ac1cb12ae872a62f2fa0647f16d867df5c73fdff58d635c7b2c4e2"} Oct 01 15:59:49 crc kubenswrapper[4949]: I1001 15:59:49.436578 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-559fd97bd5-6zst2" event={"ID":"8e733c32-7a78-4088-b089-cfe1a37bb3e4","Type":"ContainerStarted","Data":"fd3f3eb04e47a410e7cbcdadcbd7126aaa8cc4cd1faa569e2c1092f6c98da404"} Oct 01 15:59:49 crc kubenswrapper[4949]: I1001 15:59:49.436644 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-559fd97bd5-6zst2" event={"ID":"8e733c32-7a78-4088-b089-cfe1a37bb3e4","Type":"ContainerStarted","Data":"b79d90adbad80c2e8a5edc34395e4b3ba23a261685b581d6a178147754645eeb"} Oct 01 15:59:49 crc kubenswrapper[4949]: I1001 15:59:49.436742 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 15:59:49 crc kubenswrapper[4949]: I1001 15:59:49.438322 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="41cbbbe8-6b79-4667-ba8d-7252d0d1a998" containerID="07d91c7413a4e7927432e7b97110cdc226eef7f33770a710271e1d9f96cec01b" exitCode=0 Oct 01 15:59:49 crc kubenswrapper[4949]: I1001 15:59:49.438401 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tjfmw" event={"ID":"41cbbbe8-6b79-4667-ba8d-7252d0d1a998","Type":"ContainerDied","Data":"07d91c7413a4e7927432e7b97110cdc226eef7f33770a710271e1d9f96cec01b"} Oct 01 15:59:49 crc kubenswrapper[4949]: I1001 15:59:49.467699 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-559fd97bd5-6zst2" podStartSLOduration=2.467679952 podStartE2EDuration="2.467679952s" podCreationTimestamp="2025-10-01 15:59:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:49.454008211 +0000 UTC m=+1088.759614412" watchObservedRunningTime="2025-10-01 15:59:49.467679952 +0000 UTC m=+1088.773286143" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.399866 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/barbican-api-b577bcff4-r4wxv"] Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.401603 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.404234 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-public-svc" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.404441 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-barbican-internal-svc" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.411846 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b577bcff4-r4wxv"] Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.523629 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-internal-tls-certs\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.523755 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-public-tls-certs\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.524176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-combined-ca-bundle\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.524242 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc8b4\" 
(UniqueName: \"kubernetes.io/projected/b22f7df5-f2f0-485a-b277-6196948f9cee-kube-api-access-mc8b4\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.524338 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-config-data-custom\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.524388 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-config-data\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.524465 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b22f7df5-f2f0-485a-b277-6196948f9cee-logs\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.626165 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b22f7df5-f2f0-485a-b277-6196948f9cee-logs\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.626248 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-internal-tls-certs\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.626291 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-public-tls-certs\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.626356 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-combined-ca-bundle\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.626392 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mc8b4\" (UniqueName: \"kubernetes.io/projected/b22f7df5-f2f0-485a-b277-6196948f9cee-kube-api-access-mc8b4\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.626430 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-config-data-custom\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.626460 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-config-data\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.627184 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b22f7df5-f2f0-485a-b277-6196948f9cee-logs\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.631762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-config-data-custom\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.631795 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-internal-tls-certs\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.633667 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-combined-ca-bundle\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.636707 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-public-tls-certs\") pod 
\"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.649495 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b22f7df5-f2f0-485a-b277-6196948f9cee-config-data\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.653914 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mc8b4\" (UniqueName: \"kubernetes.io/projected/b22f7df5-f2f0-485a-b277-6196948f9cee-kube-api-access-mc8b4\") pod \"barbican-api-b577bcff4-r4wxv\" (UID: \"b22f7df5-f2f0-485a-b277-6196948f9cee\") " pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.716940 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.726524 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.829667 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-config-data\") pod \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.830021 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-db-sync-config-data\") pod \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.830158 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-etc-machine-id\") pod \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.830225 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-scripts\") pod \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.830277 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-combined-ca-bundle\") pod \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.830311 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dpbpt\" 
(UniqueName: \"kubernetes.io/projected/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-kube-api-access-dpbpt\") pod \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\" (UID: \"41cbbbe8-6b79-4667-ba8d-7252d0d1a998\") " Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.831350 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "41cbbbe8-6b79-4667-ba8d-7252d0d1a998" (UID: "41cbbbe8-6b79-4667-ba8d-7252d0d1a998"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.837996 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-scripts" (OuterVolumeSpecName: "scripts") pod "41cbbbe8-6b79-4667-ba8d-7252d0d1a998" (UID: "41cbbbe8-6b79-4667-ba8d-7252d0d1a998"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.839586 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-kube-api-access-dpbpt" (OuterVolumeSpecName: "kube-api-access-dpbpt") pod "41cbbbe8-6b79-4667-ba8d-7252d0d1a998" (UID: "41cbbbe8-6b79-4667-ba8d-7252d0d1a998"). InnerVolumeSpecName "kube-api-access-dpbpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.848714 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "41cbbbe8-6b79-4667-ba8d-7252d0d1a998" (UID: "41cbbbe8-6b79-4667-ba8d-7252d0d1a998"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.871243 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "41cbbbe8-6b79-4667-ba8d-7252d0d1a998" (UID: "41cbbbe8-6b79-4667-ba8d-7252d0d1a998"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.889743 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-config-data" (OuterVolumeSpecName: "config-data") pod "41cbbbe8-6b79-4667-ba8d-7252d0d1a998" (UID: "41cbbbe8-6b79-4667-ba8d-7252d0d1a998"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.932461 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.932498 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.932507 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.932517 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dpbpt\" (UniqueName: \"kubernetes.io/projected/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-kube-api-access-dpbpt\") on node \"crc\" DevicePath 
\"\"" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.932527 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:50 crc kubenswrapper[4949]: I1001 15:59:50.932535 4949 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/41cbbbe8-6b79-4667-ba8d-7252d0d1a998-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.198770 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/barbican-api-b577bcff4-r4wxv"] Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.454630 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b577bcff4-r4wxv" event={"ID":"b22f7df5-f2f0-485a-b277-6196948f9cee","Type":"ContainerStarted","Data":"1da2495274aeb8b8c1a488ff319fed9c34208dbd19f743c6ea3af1f0bcb5fc65"} Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.456728 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerStarted","Data":"2c77e2a952014e68e92bfd5fd2704deaeab4a43eed03f675bbefd4f124091bc4"} Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.458551 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-db-sync-tjfmw" event={"ID":"41cbbbe8-6b79-4667-ba8d-7252d0d1a998","Type":"ContainerDied","Data":"8d65d9ea6759625ff172f26248c794ef1646eaadd6f52af045acd995fc353dd4"} Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.458583 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8d65d9ea6759625ff172f26248c794ef1646eaadd6f52af045acd995fc353dd4" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.458718 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-db-sync-tjfmw" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.803199 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"] Oct 01 15:59:51 crc kubenswrapper[4949]: E1001 15:59:51.803722 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="41cbbbe8-6b79-4667-ba8d-7252d0d1a998" containerName="cinder-db-sync" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.803738 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="41cbbbe8-6b79-4667-ba8d-7252d0d1a998" containerName="cinder-db-sync" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.803911 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="41cbbbe8-6b79-4667-ba8d-7252d0d1a998" containerName="cinder-db-sync" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.805045 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.814155 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scripts" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.815106 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.815686 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-config-data" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.817145 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-cinder-dockercfg-rn68j" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.821681 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.912961 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-ljcwm"] Oct 01 15:59:51 crc 
kubenswrapper[4949]: I1001 15:59:51.949640 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8phqg\" (UniqueName: \"kubernetes.io/projected/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-kube-api-access-8phqg\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.950082 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.950318 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.950547 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.950695 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-scripts\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.950874 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.952359 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" podUID="09cb5d0c-f02b-4067-94ea-43e1067dbb5b" containerName="dnsmasq-dns" containerID="cri-o://824262f2dab2df3cfcf23f74355950a02079fc759d8c7f8ae279183f295f80b0" gracePeriod=10 Oct 01 15:59:51 crc kubenswrapper[4949]: I1001 15:59:51.982086 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-dtzdc"] Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.004517 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.055499 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.055606 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.055633 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-scripts\") 
pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.055677 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.055709 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8phqg\" (UniqueName: \"kubernetes.io/projected/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-kube-api-access-8phqg\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.055824 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.057239 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.065501 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc 
kubenswrapper[4949]: I1001 15:59:52.067638 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.071810 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-scripts\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.079086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.102881 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-dtzdc"] Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.175144 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8phqg\" (UniqueName: \"kubernetes.io/projected/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-kube-api-access-8phqg\") pod \"cinder-scheduler-0\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") " pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.185856 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc 
kubenswrapper[4949]: I1001 15:59:52.186314 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.186744 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.186767 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4vzgj\" (UniqueName: \"kubernetes.io/projected/93330a80-b373-43e8-88f3-26a188281912-kube-api-access-4vzgj\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.186843 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-config\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.186873 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.203362 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.204722 4949 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.251965 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.268301 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.287757 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.288040 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.288241 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.288343 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4vzgj\" (UniqueName: \"kubernetes.io/projected/93330a80-b373-43e8-88f3-26a188281912-kube-api-access-4vzgj\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc 
kubenswrapper[4949]: I1001 15:59:52.288488 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-config\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.290095 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-config\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.290159 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-dns-svc\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.290603 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-sb\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.290890 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-nb\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.317191 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-4vzgj\" (UniqueName: \"kubernetes.io/projected/93330a80-b373-43e8-88f3-26a188281912-kube-api-access-4vzgj\") pod \"dnsmasq-dns-6d97fcdd8f-dtzdc\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.360517 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.389821 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.389869 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p5sqz\" (UniqueName: \"kubernetes.io/projected/03e43939-3732-4b86-ad5d-0e04ef8570d0-kube-api-access-p5sqz\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.389942 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03e43939-3732-4b86-ad5d-0e04ef8570d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.389978 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: 
I1001 15:59:52.390011 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03e43939-3732-4b86-ad5d-0e04ef8570d0-logs\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.390055 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-scripts\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.390090 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.483502 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b577bcff4-r4wxv" event={"ID":"b22f7df5-f2f0-485a-b277-6196948f9cee","Type":"ContainerStarted","Data":"48f6a95f345d745812dcd6a4c4c85c9bdc314fcc019429a6183cc3abe0c806e5"} Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.483844 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-b577bcff4-r4wxv" event={"ID":"b22f7df5-f2f0-485a-b277-6196948f9cee","Type":"ContainerStarted","Data":"492f83ab73d938f8f1da8d098de3f590c9096591efc09ad5347c053bee24462e"} Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.483893 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.483920 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openstack/barbican-api-b577bcff4-r4wxv" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.491549 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.491617 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p5sqz\" (UniqueName: \"kubernetes.io/projected/03e43939-3732-4b86-ad5d-0e04ef8570d0-kube-api-access-p5sqz\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.491711 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03e43939-3732-4b86-ad5d-0e04ef8570d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.491735 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.491776 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03e43939-3732-4b86-ad5d-0e04ef8570d0-logs\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.491815 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-scripts\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.491862 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.493086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03e43939-3732-4b86-ad5d-0e04ef8570d0-etc-machine-id\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.494236 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03e43939-3732-4b86-ad5d-0e04ef8570d0-logs\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.499879 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.501718 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data-custom\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.502344 4949 
generic.go:334] "Generic (PLEG): container finished" podID="09cb5d0c-f02b-4067-94ea-43e1067dbb5b" containerID="824262f2dab2df3cfcf23f74355950a02079fc759d8c7f8ae279183f295f80b0" exitCode=0 Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.502436 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" event={"ID":"09cb5d0c-f02b-4067-94ea-43e1067dbb5b","Type":"ContainerDied","Data":"824262f2dab2df3cfcf23f74355950a02079fc759d8c7f8ae279183f295f80b0"} Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.503240 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-scripts\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.505065 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.522163 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/barbican-api-b577bcff4-r4wxv" podStartSLOduration=2.522147968 podStartE2EDuration="2.522147968s" podCreationTimestamp="2025-10-01 15:59:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:52.517998155 +0000 UTC m=+1091.823604346" watchObservedRunningTime="2025-10-01 15:59:52.522147968 +0000 UTC m=+1091.827754159" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.526764 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p5sqz\" (UniqueName: 
\"kubernetes.io/projected/03e43939-3732-4b86-ad5d-0e04ef8570d0-kube-api-access-p5sqz\") pod \"cinder-api-0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.535282 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerStarted","Data":"5740517d36f8c670db6bde5ccf307226113b3f4bb8e2b99873b800747677b625"} Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.655401 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.801811 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 01 15:59:52 crc kubenswrapper[4949]: I1001 15:59:52.948813 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-dtzdc"] Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.132851 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.207141 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-sb\") pod \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.207367 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-config\") pod \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.207464 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-crqvq\" (UniqueName: \"kubernetes.io/projected/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-kube-api-access-crqvq\") pod \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.207499 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-nb\") pod \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.207520 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-dns-svc\") pod \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\" (UID: \"09cb5d0c-f02b-4067-94ea-43e1067dbb5b\") " Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.224909 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-kube-api-access-crqvq" (OuterVolumeSpecName: "kube-api-access-crqvq") pod "09cb5d0c-f02b-4067-94ea-43e1067dbb5b" (UID: "09cb5d0c-f02b-4067-94ea-43e1067dbb5b"). InnerVolumeSpecName "kube-api-access-crqvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.237050 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.268823 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "09cb5d0c-f02b-4067-94ea-43e1067dbb5b" (UID: "09cb5d0c-f02b-4067-94ea-43e1067dbb5b"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.278728 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "09cb5d0c-f02b-4067-94ea-43e1067dbb5b" (UID: "09cb5d0c-f02b-4067-94ea-43e1067dbb5b"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.289066 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "09cb5d0c-f02b-4067-94ea-43e1067dbb5b" (UID: "09cb5d0c-f02b-4067-94ea-43e1067dbb5b"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.299100 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-config" (OuterVolumeSpecName: "config") pod "09cb5d0c-f02b-4067-94ea-43e1067dbb5b" (UID: "09cb5d0c-f02b-4067-94ea-43e1067dbb5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.309969 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.310014 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-config\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.310075 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-crqvq\" (UniqueName: \"kubernetes.io/projected/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-kube-api-access-crqvq\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.310094 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.310248 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/09cb5d0c-f02b-4067-94ea-43e1067dbb5b-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.552935 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" 
event={"ID":"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf","Type":"ContainerStarted","Data":"6e91d20ec9dd1d03e3c2716a5a5855b0a3c9b65d83c7517fb241f8cb3239d9c4"} Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.554621 4949 generic.go:334] "Generic (PLEG): container finished" podID="93330a80-b373-43e8-88f3-26a188281912" containerID="18571e36d94e945585e8160c7d69ade60b528958bbfaa474d40219750698ebc8" exitCode=0 Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.554668 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" event={"ID":"93330a80-b373-43e8-88f3-26a188281912","Type":"ContainerDied","Data":"18571e36d94e945585e8160c7d69ade60b528958bbfaa474d40219750698ebc8"} Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.554686 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" event={"ID":"93330a80-b373-43e8-88f3-26a188281912","Type":"ContainerStarted","Data":"c03d056cab5d0c74b72b9a9ad56a1ea5b6d0bc0f5bb87a26e3ff079dee3c25c4"} Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.556460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"03e43939-3732-4b86-ad5d-0e04ef8570d0","Type":"ContainerStarted","Data":"5f290efe3f4fb551fdc38d5fde3f95003344fcde597cb7f46767be076344a3ca"} Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.558494 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" event={"ID":"09cb5d0c-f02b-4067-94ea-43e1067dbb5b","Type":"ContainerDied","Data":"d2ac3ddc542f9b193a5de075ed680c78f33c0431d3447aa0a56018db9710c2e5"} Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.558526 4949 scope.go:117] "RemoveContainer" containerID="824262f2dab2df3cfcf23f74355950a02079fc759d8c7f8ae279183f295f80b0" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.558552 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6bb684768f-ljcwm" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.578393 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerStarted","Data":"d07612193ee4a35973eda9f834ce730ba9cb448535bb8f8f248cde50e4eba70d"} Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.619700 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.8096015319999998 podStartE2EDuration="6.61968634s" podCreationTimestamp="2025-10-01 15:59:47 +0000 UTC" firstStartedPulling="2025-10-01 15:59:48.393995888 +0000 UTC m=+1087.699602089" lastFinishedPulling="2025-10-01 15:59:53.204080706 +0000 UTC m=+1092.509686897" observedRunningTime="2025-10-01 15:59:53.613593164 +0000 UTC m=+1092.919199355" watchObservedRunningTime="2025-10-01 15:59:53.61968634 +0000 UTC m=+1092.925292531" Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.643661 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-ljcwm"] Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.652457 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6bb684768f-ljcwm"] Oct 01 15:59:53 crc kubenswrapper[4949]: I1001 15:59:53.719541 4949 scope.go:117] "RemoveContainer" containerID="a19bbc6f564135b69fde67329f49126a1a395eaedb240a5731aa02cb7935503d" Oct 01 15:59:54 crc kubenswrapper[4949]: I1001 15:59:54.225141 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 01 15:59:54 crc kubenswrapper[4949]: I1001 15:59:54.615064 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"03e43939-3732-4b86-ad5d-0e04ef8570d0","Type":"ContainerStarted","Data":"cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b"} Oct 01 15:59:54 crc kubenswrapper[4949]: I1001 
15:59:54.671302 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf","Type":"ContainerStarted","Data":"6d6f483d3c6e9c65c5cf1756f97385ced18a02d66408b257d707d64e61a9299b"} Oct 01 15:59:54 crc kubenswrapper[4949]: I1001 15:59:54.680697 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" event={"ID":"93330a80-b373-43e8-88f3-26a188281912","Type":"ContainerStarted","Data":"691a1d62aebef81b6c2bcbd9a34d47bb7ce5fe7c350f34f7dd607386a65d27b2"} Oct 01 15:59:54 crc kubenswrapper[4949]: I1001 15:59:54.680746 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 01 15:59:54 crc kubenswrapper[4949]: I1001 15:59:54.680778 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 15:59:55 crc kubenswrapper[4949]: I1001 15:59:55.619396 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cb5d0c-f02b-4067-94ea-43e1067dbb5b" path="/var/lib/kubelet/pods/09cb5d0c-f02b-4067-94ea-43e1067dbb5b/volumes" Oct 01 15:59:55 crc kubenswrapper[4949]: I1001 15:59:55.706410 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf","Type":"ContainerStarted","Data":"30ecacb35674ab5285a3d0e3f1398d6ee78609d5e88bc49bae9b97e17eebf9f2"} Oct 01 15:59:55 crc kubenswrapper[4949]: I1001 15:59:55.710491 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"03e43939-3732-4b86-ad5d-0e04ef8570d0","Type":"ContainerStarted","Data":"fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d"} Oct 01 15:59:55 crc kubenswrapper[4949]: I1001 15:59:55.711107 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="03e43939-3732-4b86-ad5d-0e04ef8570d0" 
containerName="cinder-api-log" containerID="cri-o://cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b" gracePeriod=30 Oct 01 15:59:55 crc kubenswrapper[4949]: I1001 15:59:55.711216 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-api-0" podUID="03e43939-3732-4b86-ad5d-0e04ef8570d0" containerName="cinder-api" containerID="cri-o://fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d" gracePeriod=30 Oct 01 15:59:55 crc kubenswrapper[4949]: I1001 15:59:55.731589 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.757135955 podStartE2EDuration="4.731569616s" podCreationTimestamp="2025-10-01 15:59:51 +0000 UTC" firstStartedPulling="2025-10-01 15:59:52.816632407 +0000 UTC m=+1092.122238598" lastFinishedPulling="2025-10-01 15:59:53.791066058 +0000 UTC m=+1093.096672259" observedRunningTime="2025-10-01 15:59:55.729657924 +0000 UTC m=+1095.035264115" watchObservedRunningTime="2025-10-01 15:59:55.731569616 +0000 UTC m=+1095.037175827" Oct 01 15:59:55 crc kubenswrapper[4949]: I1001 15:59:55.737547 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" podStartSLOduration=4.737509748 podStartE2EDuration="4.737509748s" podCreationTimestamp="2025-10-01 15:59:51 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:54.709522162 +0000 UTC m=+1094.015128353" watchObservedRunningTime="2025-10-01 15:59:55.737509748 +0000 UTC m=+1095.043115939" Oct 01 15:59:55 crc kubenswrapper[4949]: I1001 15:59:55.758809 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.7587886150000003 podStartE2EDuration="3.758788615s" podCreationTimestamp="2025-10-01 15:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 
UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:55.756491403 +0000 UTC m=+1095.062097594" watchObservedRunningTime="2025-10-01 15:59:55.758788615 +0000 UTC m=+1095.064394806" Oct 01 15:59:55 crc kubenswrapper[4949]: I1001 15:59:55.886954 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.135303 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-db9d4676b-6pnsc" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.480605 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.581173 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-combined-ca-bundle\") pod \"03e43939-3732-4b86-ad5d-0e04ef8570d0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.582499 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data\") pod \"03e43939-3732-4b86-ad5d-0e04ef8570d0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.582554 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03e43939-3732-4b86-ad5d-0e04ef8570d0-logs\") pod \"03e43939-3732-4b86-ad5d-0e04ef8570d0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.582628 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: 
\"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data-custom\") pod \"03e43939-3732-4b86-ad5d-0e04ef8570d0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.582684 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03e43939-3732-4b86-ad5d-0e04ef8570d0-etc-machine-id\") pod \"03e43939-3732-4b86-ad5d-0e04ef8570d0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.582725 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-scripts\") pod \"03e43939-3732-4b86-ad5d-0e04ef8570d0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.582769 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/03e43939-3732-4b86-ad5d-0e04ef8570d0-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "03e43939-3732-4b86-ad5d-0e04ef8570d0" (UID: "03e43939-3732-4b86-ad5d-0e04ef8570d0"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.582787 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p5sqz\" (UniqueName: \"kubernetes.io/projected/03e43939-3732-4b86-ad5d-0e04ef8570d0-kube-api-access-p5sqz\") pod \"03e43939-3732-4b86-ad5d-0e04ef8570d0\" (UID: \"03e43939-3732-4b86-ad5d-0e04ef8570d0\") " Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.583223 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/03e43939-3732-4b86-ad5d-0e04ef8570d0-logs" (OuterVolumeSpecName: "logs") pod "03e43939-3732-4b86-ad5d-0e04ef8570d0" (UID: "03e43939-3732-4b86-ad5d-0e04ef8570d0"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.583663 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/03e43939-3732-4b86-ad5d-0e04ef8570d0-logs\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.583686 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/03e43939-3732-4b86-ad5d-0e04ef8570d0-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.587455 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "03e43939-3732-4b86-ad5d-0e04ef8570d0" (UID: "03e43939-3732-4b86-ad5d-0e04ef8570d0"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.590223 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-scripts" (OuterVolumeSpecName: "scripts") pod "03e43939-3732-4b86-ad5d-0e04ef8570d0" (UID: "03e43939-3732-4b86-ad5d-0e04ef8570d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.607556 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/03e43939-3732-4b86-ad5d-0e04ef8570d0-kube-api-access-p5sqz" (OuterVolumeSpecName: "kube-api-access-p5sqz") pod "03e43939-3732-4b86-ad5d-0e04ef8570d0" (UID: "03e43939-3732-4b86-ad5d-0e04ef8570d0"). InnerVolumeSpecName "kube-api-access-p5sqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.619229 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "03e43939-3732-4b86-ad5d-0e04ef8570d0" (UID: "03e43939-3732-4b86-ad5d-0e04ef8570d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.639992 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data" (OuterVolumeSpecName: "config-data") pod "03e43939-3732-4b86-ad5d-0e04ef8570d0" (UID: "03e43939-3732-4b86-ad5d-0e04ef8570d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.684920 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p5sqz\" (UniqueName: \"kubernetes.io/projected/03e43939-3732-4b86-ad5d-0e04ef8570d0-kube-api-access-p5sqz\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.684972 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.684984 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.684994 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.685004 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/03e43939-3732-4b86-ad5d-0e04ef8570d0-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.722655 4949 generic.go:334] "Generic (PLEG): container finished" podID="03e43939-3732-4b86-ad5d-0e04ef8570d0" containerID="fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d" exitCode=0 Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.722700 4949 generic.go:334] "Generic (PLEG): container finished" podID="03e43939-3732-4b86-ad5d-0e04ef8570d0" containerID="cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b" exitCode=143 Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.722700 4949 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.722817 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"03e43939-3732-4b86-ad5d-0e04ef8570d0","Type":"ContainerDied","Data":"fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d"} Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.722850 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"03e43939-3732-4b86-ad5d-0e04ef8570d0","Type":"ContainerDied","Data":"cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b"} Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.722863 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"03e43939-3732-4b86-ad5d-0e04ef8570d0","Type":"ContainerDied","Data":"5f290efe3f4fb551fdc38d5fde3f95003344fcde597cb7f46767be076344a3ca"} Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.722880 4949 scope.go:117] "RemoveContainer" containerID="fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.759715 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-api-0"] Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.759803 4949 scope.go:117] "RemoveContainer" containerID="cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.768996 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-api-0"] Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.783217 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-api-0"] Oct 01 15:59:56 crc kubenswrapper[4949]: E1001 15:59:56.783646 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e43939-3732-4b86-ad5d-0e04ef8570d0" containerName="cinder-api" Oct 01 15:59:56 crc 
kubenswrapper[4949]: I1001 15:59:56.783671 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e43939-3732-4b86-ad5d-0e04ef8570d0" containerName="cinder-api" Oct 01 15:59:56 crc kubenswrapper[4949]: E1001 15:59:56.783691 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cb5d0c-f02b-4067-94ea-43e1067dbb5b" containerName="init" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.783700 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cb5d0c-f02b-4067-94ea-43e1067dbb5b" containerName="init" Oct 01 15:59:56 crc kubenswrapper[4949]: E1001 15:59:56.783715 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09cb5d0c-f02b-4067-94ea-43e1067dbb5b" containerName="dnsmasq-dns" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.783724 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="09cb5d0c-f02b-4067-94ea-43e1067dbb5b" containerName="dnsmasq-dns" Oct 01 15:59:56 crc kubenswrapper[4949]: E1001 15:59:56.783765 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="03e43939-3732-4b86-ad5d-0e04ef8570d0" containerName="cinder-api-log" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.783774 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="03e43939-3732-4b86-ad5d-0e04ef8570d0" containerName="cinder-api-log" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.783963 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="09cb5d0c-f02b-4067-94ea-43e1067dbb5b" containerName="dnsmasq-dns" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.783974 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e43939-3732-4b86-ad5d-0e04ef8570d0" containerName="cinder-api-log" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.783990 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="03e43939-3732-4b86-ad5d-0e04ef8570d0" containerName="cinder-api" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.785110 4949 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.792427 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-internal-svc" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.792674 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-cinder-public-svc" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.792950 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-api-config-data" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.793109 4949 scope.go:117] "RemoveContainer" containerID="fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.793613 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 01 15:59:56 crc kubenswrapper[4949]: E1001 15:59:56.794544 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d\": container with ID starting with fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d not found: ID does not exist" containerID="fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.794624 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d"} err="failed to get container status \"fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d\": rpc error: code = NotFound desc = could not find container \"fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d\": container with ID starting with fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d not found: ID does not exist" Oct 01 
15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.794664 4949 scope.go:117] "RemoveContainer" containerID="cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b" Oct 01 15:59:56 crc kubenswrapper[4949]: E1001 15:59:56.798489 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b\": container with ID starting with cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b not found: ID does not exist" containerID="cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.798539 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b"} err="failed to get container status \"cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b\": rpc error: code = NotFound desc = could not find container \"cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b\": container with ID starting with cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b not found: ID does not exist" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.798571 4949 scope.go:117] "RemoveContainer" containerID="fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.813089 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d"} err="failed to get container status \"fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d\": rpc error: code = NotFound desc = could not find container \"fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d\": container with ID starting with fa8922aa260c95ef07a7ebcaa973ad8143b5cb8933f6ad3ba15ba60686b26c5d not found: ID does not 
exist" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.813138 4949 scope.go:117] "RemoveContainer" containerID="cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.818686 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b"} err="failed to get container status \"cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b\": rpc error: code = NotFound desc = could not find container \"cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b\": container with ID starting with cb1addb80ddb43dcd25a2dc8e769f8873f315342841b263f1e4b2a2cf02fe42b not found: ID does not exist" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.889455 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxr7d\" (UniqueName: \"kubernetes.io/projected/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-kube-api-access-pxr7d\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.889504 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.889535 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-config-data-custom\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.889571 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.889659 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.889683 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-logs\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.889725 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.889743 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-config-data\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.889769 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-scripts\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.990869 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.990945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-logs\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.991004 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.991032 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-config-data\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.991068 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-scripts\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.991142 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pxr7d\" (UniqueName: \"kubernetes.io/projected/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-kube-api-access-pxr7d\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.991146 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-etc-machine-id\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.991169 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.991216 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-config-data-custom\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.991239 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.991551 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-logs\") pod \"cinder-api-0\" (UID: 
\"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.996691 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-combined-ca-bundle\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.997497 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-scripts\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.997996 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-config-data-custom\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.997959 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-internal-tls-certs\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:56 crc kubenswrapper[4949]: I1001 15:59:56.998271 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-public-tls-certs\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:57 crc kubenswrapper[4949]: I1001 15:59:57.000404 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-config-data\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:57 crc kubenswrapper[4949]: I1001 15:59:57.010575 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxr7d\" (UniqueName: \"kubernetes.io/projected/2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a-kube-api-access-pxr7d\") pod \"cinder-api-0\" (UID: \"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a\") " pod="openstack/cinder-api-0" Oct 01 15:59:57 crc kubenswrapper[4949]: I1001 15:59:57.117084 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-api-0" Oct 01 15:59:57 crc kubenswrapper[4949]: I1001 15:59:57.187448 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 01 15:59:57 crc kubenswrapper[4949]: I1001 15:59:57.622463 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="03e43939-3732-4b86-ad5d-0e04ef8570d0" path="/var/lib/kubelet/pods/03e43939-3732-4b86-ad5d-0e04ef8570d0/volumes" Oct 01 15:59:57 crc kubenswrapper[4949]: I1001 15:59:57.623589 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-api-0"] Oct 01 15:59:57 crc kubenswrapper[4949]: I1001 15:59:57.736220 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a","Type":"ContainerStarted","Data":"cda666b2a428aea1cdfdbf748e8af163e4adafb619f478734335b90a097a188c"} Oct 01 15:59:58 crc kubenswrapper[4949]: I1001 15:59:58.746012 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-api-0" event={"ID":"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a","Type":"ContainerStarted","Data":"9e0bfc8acaea6492913df22841dc60be0e0a6ca6361835591f5d0127c0c9270f"} Oct 01 15:59:59 crc kubenswrapper[4949]: I1001 15:59:59.757456 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/cinder-api-0" event={"ID":"2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a","Type":"ContainerStarted","Data":"d96e66429503d89e75483919dbc6b5ab1b3d008bbf44e134a46dfa01d66b60da"} Oct 01 15:59:59 crc kubenswrapper[4949]: I1001 15:59:59.757919 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/cinder-api-0" Oct 01 15:59:59 crc kubenswrapper[4949]: I1001 15:59:59.784729 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-api-0" podStartSLOduration=3.784707672 podStartE2EDuration="3.784707672s" podCreationTimestamp="2025-10-01 15:59:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 15:59:59.77652624 +0000 UTC m=+1099.082132481" watchObservedRunningTime="2025-10-01 15:59:59.784707672 +0000 UTC m=+1099.090313863" Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.171674 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"] Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.172946 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.179334 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.179468 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.179907 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"]
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.358262 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5b65325-4c38-4c7a-99b4-3fc38060f598-config-volume\") pod \"collect-profiles-29322240-gggbl\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.358363 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5b65325-4c38-4c7a-99b4-3fc38060f598-secret-volume\") pod \"collect-profiles-29322240-gggbl\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.358395 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2875c\" (UniqueName: \"kubernetes.io/projected/b5b65325-4c38-4c7a-99b4-3fc38060f598-kube-api-access-2875c\") pod \"collect-profiles-29322240-gggbl\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.460438 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5b65325-4c38-4c7a-99b4-3fc38060f598-config-volume\") pod \"collect-profiles-29322240-gggbl\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.460640 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5b65325-4c38-4c7a-99b4-3fc38060f598-secret-volume\") pod \"collect-profiles-29322240-gggbl\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.462022 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2875c\" (UniqueName: \"kubernetes.io/projected/b5b65325-4c38-4c7a-99b4-3fc38060f598-kube-api-access-2875c\") pod \"collect-profiles-29322240-gggbl\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.461573 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5b65325-4c38-4c7a-99b4-3fc38060f598-config-volume\") pod \"collect-profiles-29322240-gggbl\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.467633 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5b65325-4c38-4c7a-99b4-3fc38060f598-secret-volume\") pod \"collect-profiles-29322240-gggbl\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.483804 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2875c\" (UniqueName: \"kubernetes.io/projected/b5b65325-4c38-4c7a-99b4-3fc38060f598-kube-api-access-2875c\") pod \"collect-profiles-29322240-gggbl\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:00 crc kubenswrapper[4949]: I1001 16:00:00.533509 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:01 crc kubenswrapper[4949]: I1001 16:00:01.019837 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"]
Oct 01 16:00:01 crc kubenswrapper[4949]: I1001 16:00:01.778971 4949 generic.go:334] "Generic (PLEG): container finished" podID="b5b65325-4c38-4c7a-99b4-3fc38060f598" containerID="e1de2de2ea6f7e7f607a8e84d1054c2b1e49b2ff79099848223ed324c405699b" exitCode=0
Oct 01 16:00:01 crc kubenswrapper[4949]: I1001 16:00:01.779434 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl" event={"ID":"b5b65325-4c38-4c7a-99b4-3fc38060f598","Type":"ContainerDied","Data":"e1de2de2ea6f7e7f607a8e84d1054c2b1e49b2ff79099848223ed324c405699b"}
Oct 01 16:00:01 crc kubenswrapper[4949]: I1001 16:00:01.779504 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl" event={"ID":"b5b65325-4c38-4c7a-99b4-3fc38060f598","Type":"ContainerStarted","Data":"569af4b7c9e270fe033b91d1acd0205696078bb2e2bdb16963a7178277fb6153"}
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.250998 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b577bcff4-r4wxv"
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.283644 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/barbican-api-b577bcff4-r4wxv"
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.366376 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc"
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.374840 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-db9d4676b-6pnsc"]
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.375263 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-db9d4676b-6pnsc" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api-log" containerID="cri-o://53955d744a27802dcd05146fdedb9a20333246013bce43929e4546714ab369d9" gracePeriod=30
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.376334 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/barbican-api-db9d4676b-6pnsc" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api" containerID="cri-o://24964535941ac1dc01ab9afae1eb89f6f1ef6eaa1955e9582ed1bea2533325ab" gracePeriod=30
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.447343 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-4qcv5"]
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.447725 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" podUID="4e77e083-5e09-4383-8faa-1b16c353b5af" containerName="dnsmasq-dns" containerID="cri-o://2e1507190ea05aa88c12635ee1bddfcad1ca80c8b9a6c04f283668711e625c3d" gracePeriod=10
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.599235 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0"
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.651812 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.789068 4949 generic.go:334] "Generic (PLEG): container finished" podID="4e77e083-5e09-4383-8faa-1b16c353b5af" containerID="2e1507190ea05aa88c12635ee1bddfcad1ca80c8b9a6c04f283668711e625c3d" exitCode=0
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.789313 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" event={"ID":"4e77e083-5e09-4383-8faa-1b16c353b5af","Type":"ContainerDied","Data":"2e1507190ea05aa88c12635ee1bddfcad1ca80c8b9a6c04f283668711e625c3d"}
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.790798 4949 generic.go:334] "Generic (PLEG): container finished" podID="103475e5-ed89-4051-9173-43fd280a60c4" containerID="53955d744a27802dcd05146fdedb9a20333246013bce43929e4546714ab369d9" exitCode=143
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.790825 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db9d4676b-6pnsc" event={"ID":"103475e5-ed89-4051-9173-43fd280a60c4","Type":"ContainerDied","Data":"53955d744a27802dcd05146fdedb9a20333246013bce43929e4546714ab369d9"}
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.791210 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerName="cinder-scheduler" containerID="cri-o://6d6f483d3c6e9c65c5cf1756f97385ced18a02d66408b257d707d64e61a9299b" gracePeriod=30
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.791245 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/cinder-scheduler-0" podUID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerName="probe" containerID="cri-o://30ecacb35674ab5285a3d0e3f1398d6ee78609d5e88bc49bae9b97e17eebf9f2" gracePeriod=30
Oct 01 16:00:02 crc kubenswrapper[4949]: I1001 16:00:02.992199 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5"
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.123298 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-nb\") pod \"4e77e083-5e09-4383-8faa-1b16c353b5af\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") "
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.123665 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-dns-svc\") pod \"4e77e083-5e09-4383-8faa-1b16c353b5af\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") "
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.123928 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-sb\") pod \"4e77e083-5e09-4383-8faa-1b16c353b5af\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") "
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.124261 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4dnd\" (UniqueName: \"kubernetes.io/projected/4e77e083-5e09-4383-8faa-1b16c353b5af-kube-api-access-p4dnd\") pod \"4e77e083-5e09-4383-8faa-1b16c353b5af\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") "
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.124364 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-config\") pod \"4e77e083-5e09-4383-8faa-1b16c353b5af\" (UID: \"4e77e083-5e09-4383-8faa-1b16c353b5af\") "
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.131238 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e77e083-5e09-4383-8faa-1b16c353b5af-kube-api-access-p4dnd" (OuterVolumeSpecName: "kube-api-access-p4dnd") pod "4e77e083-5e09-4383-8faa-1b16c353b5af" (UID: "4e77e083-5e09-4383-8faa-1b16c353b5af"). InnerVolumeSpecName "kube-api-access-p4dnd". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.152806 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.201790 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4e77e083-5e09-4383-8faa-1b16c353b5af" (UID: "4e77e083-5e09-4383-8faa-1b16c353b5af"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.201794 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4e77e083-5e09-4383-8faa-1b16c353b5af" (UID: "4e77e083-5e09-4383-8faa-1b16c353b5af"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.206262 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4e77e083-5e09-4383-8faa-1b16c353b5af" (UID: "4e77e083-5e09-4383-8faa-1b16c353b5af"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.225481 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-config" (OuterVolumeSpecName: "config") pod "4e77e083-5e09-4383-8faa-1b16c353b5af" (UID: "4e77e083-5e09-4383-8faa-1b16c353b5af"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.228925 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-dns-svc\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.228956 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-sb\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.228970 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p4dnd\" (UniqueName: \"kubernetes.io/projected/4e77e083-5e09-4383-8faa-1b16c353b5af-kube-api-access-p4dnd\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.229004 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-config\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.229015 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4e77e083-5e09-4383-8faa-1b16c353b5af-ovsdbserver-nb\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.329824 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5b65325-4c38-4c7a-99b4-3fc38060f598-secret-volume\") pod \"b5b65325-4c38-4c7a-99b4-3fc38060f598\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") "
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.329896 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2875c\" (UniqueName: \"kubernetes.io/projected/b5b65325-4c38-4c7a-99b4-3fc38060f598-kube-api-access-2875c\") pod \"b5b65325-4c38-4c7a-99b4-3fc38060f598\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") "
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.329954 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5b65325-4c38-4c7a-99b4-3fc38060f598-config-volume\") pod \"b5b65325-4c38-4c7a-99b4-3fc38060f598\" (UID: \"b5b65325-4c38-4c7a-99b4-3fc38060f598\") "
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.339771 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b5b65325-4c38-4c7a-99b4-3fc38060f598-config-volume" (OuterVolumeSpecName: "config-volume") pod "b5b65325-4c38-4c7a-99b4-3fc38060f598" (UID: "b5b65325-4c38-4c7a-99b4-3fc38060f598"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.355846 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b5b65325-4c38-4c7a-99b4-3fc38060f598-kube-api-access-2875c" (OuterVolumeSpecName: "kube-api-access-2875c") pod "b5b65325-4c38-4c7a-99b4-3fc38060f598" (UID: "b5b65325-4c38-4c7a-99b4-3fc38060f598"). InnerVolumeSpecName "kube-api-access-2875c". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.356543 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5b65325-4c38-4c7a-99b4-3fc38060f598-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "b5b65325-4c38-4c7a-99b4-3fc38060f598" (UID: "b5b65325-4c38-4c7a-99b4-3fc38060f598"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.433608 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/b5b65325-4c38-4c7a-99b4-3fc38060f598-secret-volume\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.433658 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2875c\" (UniqueName: \"kubernetes.io/projected/b5b65325-4c38-4c7a-99b4-3fc38060f598-kube-api-access-2875c\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.433670 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5b65325-4c38-4c7a-99b4-3fc38060f598-config-volume\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.801914 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl" event={"ID":"b5b65325-4c38-4c7a-99b4-3fc38060f598","Type":"ContainerDied","Data":"569af4b7c9e270fe033b91d1acd0205696078bb2e2bdb16963a7178277fb6153"}
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.802315 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="569af4b7c9e270fe033b91d1acd0205696078bb2e2bdb16963a7178277fb6153"
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.802386 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.806301 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5"
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.806325 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" event={"ID":"4e77e083-5e09-4383-8faa-1b16c353b5af","Type":"ContainerDied","Data":"3da457eb9f56f34e1e29af8583b802f3c62e61060a533d456f24fcff49ad3010"}
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.806369 4949 scope.go:117] "RemoveContainer" containerID="2e1507190ea05aa88c12635ee1bddfcad1ca80c8b9a6c04f283668711e625c3d"
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.810259 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerID="30ecacb35674ab5285a3d0e3f1398d6ee78609d5e88bc49bae9b97e17eebf9f2" exitCode=0
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.810297 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf","Type":"ContainerDied","Data":"30ecacb35674ab5285a3d0e3f1398d6ee78609d5e88bc49bae9b97e17eebf9f2"}
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.840507 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-4qcv5"]
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.851315 4949 scope.go:117] "RemoveContainer" containerID="6590b796907198c29431558d544819d99d2712a0951b198c93ae3d929c255546"
Oct 01 16:00:03 crc kubenswrapper[4949]: I1001 16:00:03.854545 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-7987f74bbc-4qcv5"]
Oct 01 16:00:05 crc kubenswrapper[4949]: I1001 16:00:05.554556 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-db9d4676b-6pnsc" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api-log" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": read tcp 10.217.0.2:34078->10.217.0.146:9311: read: connection reset by peer"
Oct 01 16:00:05 crc kubenswrapper[4949]: I1001 16:00:05.554619 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/barbican-api-db9d4676b-6pnsc" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api" probeResult="failure" output="Get \"http://10.217.0.146:9311/healthcheck\": read tcp 10.217.0.2:34080->10.217.0.146:9311: read: connection reset by peer"
Oct 01 16:00:05 crc kubenswrapper[4949]: I1001 16:00:05.613422 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e77e083-5e09-4383-8faa-1b16c353b5af" path="/var/lib/kubelet/pods/4e77e083-5e09-4383-8faa-1b16c353b5af/volumes"
Oct 01 16:00:05 crc kubenswrapper[4949]: I1001 16:00:05.835289 4949 generic.go:334] "Generic (PLEG): container finished" podID="103475e5-ed89-4051-9173-43fd280a60c4" containerID="24964535941ac1dc01ab9afae1eb89f6f1ef6eaa1955e9582ed1bea2533325ab" exitCode=0
Oct 01 16:00:05 crc kubenswrapper[4949]: I1001 16:00:05.835349 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db9d4676b-6pnsc" event={"ID":"103475e5-ed89-4051-9173-43fd280a60c4","Type":"ContainerDied","Data":"24964535941ac1dc01ab9afae1eb89f6f1ef6eaa1955e9582ed1bea2533325ab"}
Oct 01 16:00:05 crc kubenswrapper[4949]: I1001 16:00:05.837543 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerID="6d6f483d3c6e9c65c5cf1756f97385ced18a02d66408b257d707d64e61a9299b" exitCode=0
Oct 01 16:00:05 crc kubenswrapper[4949]: I1001 16:00:05.837562 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf","Type":"ContainerDied","Data":"6d6f483d3c6e9c65c5cf1756f97385ced18a02d66408b257d707d64e61a9299b"}
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.007167 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-db9d4676b-6pnsc"
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.173959 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.181761 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data\") pod \"103475e5-ed89-4051-9173-43fd280a60c4\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.181927 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-combined-ca-bundle\") pod \"103475e5-ed89-4051-9173-43fd280a60c4\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.181973 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data-custom\") pod \"103475e5-ed89-4051-9173-43fd280a60c4\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.182028 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103475e5-ed89-4051-9173-43fd280a60c4-logs\") pod \"103475e5-ed89-4051-9173-43fd280a60c4\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.182083 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrhml\" (UniqueName: \"kubernetes.io/projected/103475e5-ed89-4051-9173-43fd280a60c4-kube-api-access-jrhml\") pod \"103475e5-ed89-4051-9173-43fd280a60c4\" (UID: \"103475e5-ed89-4051-9173-43fd280a60c4\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.182668 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/103475e5-ed89-4051-9173-43fd280a60c4-logs" (OuterVolumeSpecName: "logs") pod "103475e5-ed89-4051-9173-43fd280a60c4" (UID: "103475e5-ed89-4051-9173-43fd280a60c4"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.183161 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/103475e5-ed89-4051-9173-43fd280a60c4-logs\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.189708 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/103475e5-ed89-4051-9173-43fd280a60c4-kube-api-access-jrhml" (OuterVolumeSpecName: "kube-api-access-jrhml") pod "103475e5-ed89-4051-9173-43fd280a60c4" (UID: "103475e5-ed89-4051-9173-43fd280a60c4"). InnerVolumeSpecName "kube-api-access-jrhml". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.194382 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "103475e5-ed89-4051-9173-43fd280a60c4" (UID: "103475e5-ed89-4051-9173-43fd280a60c4"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.236683 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "103475e5-ed89-4051-9173-43fd280a60c4" (UID: "103475e5-ed89-4051-9173-43fd280a60c4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.258165 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data" (OuterVolumeSpecName: "config-data") pod "103475e5-ed89-4051-9173-43fd280a60c4" (UID: "103475e5-ed89-4051-9173-43fd280a60c4"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.284792 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data-custom\") pod \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.284920 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-etc-machine-id\") pod \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.284984 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-scripts\") pod \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.285014 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data\") pod \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.285155 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-combined-ca-bundle\") pod \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.285197 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8phqg\" (UniqueName: \"kubernetes.io/projected/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-kube-api-access-8phqg\") pod \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\" (UID: \"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf\") "
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.285640 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.285665 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.285680 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/103475e5-ed89-4051-9173-43fd280a60c4-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.285692 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrhml\" (UniqueName: \"kubernetes.io/projected/103475e5-ed89-4051-9173-43fd280a60c4-kube-api-access-jrhml\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.285833 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" (UID: "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.290245 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-kube-api-access-8phqg" (OuterVolumeSpecName: "kube-api-access-8phqg") pod "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" (UID: "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf"). InnerVolumeSpecName "kube-api-access-8phqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.290548 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" (UID: "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.290879 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-scripts" (OuterVolumeSpecName: "scripts") pod "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" (UID: "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.346203 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" (UID: "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.387661 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.387982 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-scripts\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.387997 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.388009 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8phqg\" (UniqueName: \"kubernetes.io/projected/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-kube-api-access-8phqg\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.388024 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.396748 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data" (OuterVolumeSpecName: "config-data") pod "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" (UID: "8b11c0e4-ef14-48dd-84bb-2eb40ff02abf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.489320 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf-config-data\") on node \"crc\" DevicePath \"\""
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.850593 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/barbican-api-db9d4676b-6pnsc" event={"ID":"103475e5-ed89-4051-9173-43fd280a60c4","Type":"ContainerDied","Data":"79cb92a86cbc41c4a571bdd0b64471e4c1e141e4e8fd3679409baade464188b5"}
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.850652 4949 scope.go:117] "RemoveContainer" containerID="24964535941ac1dc01ab9afae1eb89f6f1ef6eaa1955e9582ed1bea2533325ab"
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.850743 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/barbican-api-db9d4676b-6pnsc"
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.861200 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-scheduler-0"
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.861113 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"8b11c0e4-ef14-48dd-84bb-2eb40ff02abf","Type":"ContainerDied","Data":"6e91d20ec9dd1d03e3c2716a5a5855b0a3c9b65d83c7517fb241f8cb3239d9c4"}
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.886033 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-api-db9d4676b-6pnsc"]
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.900674 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-api-db9d4676b-6pnsc"]
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.902475 4949 scope.go:117] "RemoveContainer" containerID="53955d744a27802dcd05146fdedb9a20333246013bce43929e4546714ab369d9"
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.926630 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.951332 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.952571 4949 scope.go:117] "RemoveContainer" containerID="30ecacb35674ab5285a3d0e3f1398d6ee78609d5e88bc49bae9b97e17eebf9f2"
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.960173 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-scheduler-0"]
Oct 01 16:00:06 crc kubenswrapper[4949]: E1001 16:00:06.961081 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerName="probe"
Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961107 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerName="probe"
Oct 01 16:00:06 crc kubenswrapper[4949]: E1001 16:00:06.961173 4949 cpu_manager.go:410] "RemoveStaleState: removing
container" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api-log" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961181 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api-log" Oct 01 16:00:06 crc kubenswrapper[4949]: E1001 16:00:06.961194 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerName="cinder-scheduler" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961199 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerName="cinder-scheduler" Oct 01 16:00:06 crc kubenswrapper[4949]: E1001 16:00:06.961216 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b5b65325-4c38-4c7a-99b4-3fc38060f598" containerName="collect-profiles" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961221 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b5b65325-4c38-4c7a-99b4-3fc38060f598" containerName="collect-profiles" Oct 01 16:00:06 crc kubenswrapper[4949]: E1001 16:00:06.961235 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e77e083-5e09-4383-8faa-1b16c353b5af" containerName="dnsmasq-dns" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961240 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e77e083-5e09-4383-8faa-1b16c353b5af" containerName="dnsmasq-dns" Oct 01 16:00:06 crc kubenswrapper[4949]: E1001 16:00:06.961250 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961255 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api" Oct 01 16:00:06 crc kubenswrapper[4949]: E1001 16:00:06.961263 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="4e77e083-5e09-4383-8faa-1b16c353b5af" containerName="init" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961269 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e77e083-5e09-4383-8faa-1b16c353b5af" containerName="init" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961574 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerName="cinder-scheduler" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961590 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" containerName="probe" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961603 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e77e083-5e09-4383-8faa-1b16c353b5af" containerName="dnsmasq-dns" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961617 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api-log" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961629 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="103475e5-ed89-4051-9173-43fd280a60c4" containerName="barbican-api" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.961637 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b5b65325-4c38-4c7a-99b4-3fc38060f598" containerName="collect-profiles" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.962804 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.964905 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-scheduler-config-data" Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.973521 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 01 16:00:06 crc kubenswrapper[4949]: I1001 16:00:06.991375 4949 scope.go:117] "RemoveContainer" containerID="6d6f483d3c6e9c65c5cf1756f97385ced18a02d66408b257d707d64e61a9299b" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.103978 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-scripts\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.104059 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.104106 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dg97m\" (UniqueName: \"kubernetes.io/projected/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-kube-api-access-dg97m\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.104238 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-config-data\") 
pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.104311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.104345 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.205264 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-config-data\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.205881 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.205930 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 
01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.206053 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-scripts\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.206116 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.206523 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dg97m\" (UniqueName: \"kubernetes.io/projected/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-kube-api-access-dg97m\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.206356 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-etc-machine-id\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.210334 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-config-data\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.210716 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-scripts\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.210741 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-config-data-custom\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.211208 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-combined-ca-bundle\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.239165 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dg97m\" (UniqueName: \"kubernetes.io/projected/9621ba2f-9a4b-4a89-9e20-fd7f54600e34-kube-api-access-dg97m\") pod \"cinder-scheduler-0\" (UID: \"9621ba2f-9a4b-4a89-9e20-fd7f54600e34\") " pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.286602 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-scheduler-0" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.612758 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="103475e5-ed89-4051-9173-43fd280a60c4" path="/var/lib/kubelet/pods/103475e5-ed89-4051-9173-43fd280a60c4/volumes" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.613616 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b11c0e4-ef14-48dd-84bb-2eb40ff02abf" path="/var/lib/kubelet/pods/8b11c0e4-ef14-48dd-84bb-2eb40ff02abf/volumes" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.734029 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-scheduler-0"] Oct 01 16:00:07 crc kubenswrapper[4949]: W1001 16:00:07.734959 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9621ba2f_9a4b_4a89_9e20_fd7f54600e34.slice/crio-4abbc92242359a5d775c9f518914c81c4ed30577d89419ecc21078c2b50f74b4 WatchSource:0}: Error finding container 4abbc92242359a5d775c9f518914c81c4ed30577d89419ecc21078c2b50f74b4: Status 404 returned error can't find the container with id 4abbc92242359a5d775c9f518914c81c4ed30577d89419ecc21078c2b50f74b4 Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.876763 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9621ba2f-9a4b-4a89-9e20-fd7f54600e34","Type":"ContainerStarted","Data":"4abbc92242359a5d775c9f518914c81c4ed30577d89419ecc21078c2b50f74b4"} Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.913822 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/dnsmasq-dns-7987f74bbc-4qcv5" podUID="4e77e083-5e09-4383-8faa-1b16c353b5af" containerName="dnsmasq-dns" probeResult="failure" output="dial tcp 10.217.0.139:5353: i/o timeout" Oct 01 16:00:07 crc kubenswrapper[4949]: I1001 16:00:07.945765 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openstack/keystone-6544c97df6-6skzg" Oct 01 16:00:08 crc kubenswrapper[4949]: I1001 16:00:08.888948 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9621ba2f-9a4b-4a89-9e20-fd7f54600e34","Type":"ContainerStarted","Data":"782e583f79bd8de584f52d9dbef9be5159f082081ca7479bfba57f8b21932c52"} Oct 01 16:00:09 crc kubenswrapper[4949]: I1001 16:00:09.110078 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/cinder-api-0" Oct 01 16:00:09 crc kubenswrapper[4949]: I1001 16:00:09.903054 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-scheduler-0" event={"ID":"9621ba2f-9a4b-4a89-9e20-fd7f54600e34","Type":"ContainerStarted","Data":"0197a9609699f5e433503e10f4ed22b71adbfff5738652f81c82e337d18590b5"} Oct 01 16:00:09 crc kubenswrapper[4949]: I1001 16:00:09.932632 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-scheduler-0" podStartSLOduration=3.932611695 podStartE2EDuration="3.932611695s" podCreationTimestamp="2025-10-01 16:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:00:09.925117607 +0000 UTC m=+1109.230723798" watchObservedRunningTime="2025-10-01 16:00:09.932611695 +0000 UTC m=+1109.238217906" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.768088 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/openstackclient"] Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.769800 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.772086 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-config" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.772702 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-config-secret" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.773516 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstackclient-openstackclient-dockercfg-lw2gp" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.779047 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.873163 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pg9d5\" (UniqueName: \"kubernetes.io/projected/b0b87944-8572-4efb-b446-46b0aa47a9ed-kube-api-access-pg9d5\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.873274 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0b87944-8572-4efb-b446-46b0aa47a9ed-openstack-config-secret\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.873321 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0b87944-8572-4efb-b446-46b0aa47a9ed-openstack-config\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.873421 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b87944-8572-4efb-b446-46b0aa47a9ed-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.974596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b87944-8572-4efb-b446-46b0aa47a9ed-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.974693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pg9d5\" (UniqueName: \"kubernetes.io/projected/b0b87944-8572-4efb-b446-46b0aa47a9ed-kube-api-access-pg9d5\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.974752 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0b87944-8572-4efb-b446-46b0aa47a9ed-openstack-config-secret\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.974784 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/b0b87944-8572-4efb-b446-46b0aa47a9ed-openstack-config\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.975949 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: 
\"kubernetes.io/configmap/b0b87944-8572-4efb-b446-46b0aa47a9ed-openstack-config\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.980151 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/b0b87944-8572-4efb-b446-46b0aa47a9ed-openstack-config-secret\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.980236 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b0b87944-8572-4efb-b446-46b0aa47a9ed-combined-ca-bundle\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:10 crc kubenswrapper[4949]: I1001 16:00:10.999308 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pg9d5\" (UniqueName: \"kubernetes.io/projected/b0b87944-8572-4efb-b446-46b0aa47a9ed-kube-api-access-pg9d5\") pod \"openstackclient\" (UID: \"b0b87944-8572-4efb-b446-46b0aa47a9ed\") " pod="openstack/openstackclient" Oct 01 16:00:11 crc kubenswrapper[4949]: I1001 16:00:11.096919 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/openstackclient" Oct 01 16:00:11 crc kubenswrapper[4949]: I1001 16:00:11.561970 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/openstackclient"] Oct 01 16:00:11 crc kubenswrapper[4949]: I1001 16:00:11.933721 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b0b87944-8572-4efb-b446-46b0aa47a9ed","Type":"ContainerStarted","Data":"aadbd52a0ff3f0db727f6db69e626605c6a66b99545ed78c47e492aba662136a"} Oct 01 16:00:12 crc kubenswrapper[4949]: I1001 16:00:12.286999 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-scheduler-0" Oct 01 16:00:12 crc kubenswrapper[4949]: I1001 16:00:12.651898 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-575569c7bd-g6srl" Oct 01 16:00:12 crc kubenswrapper[4949]: I1001 16:00:12.668895 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/placement-575569c7bd-g6srl" Oct 01 16:00:14 crc kubenswrapper[4949]: I1001 16:00:14.992955 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.464617 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-db-create-bpchz"] Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.465970 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bpchz" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.479793 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bpchz"] Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.595820 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-scheduler-0" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.602031 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4sjt\" (UniqueName: \"kubernetes.io/projected/838c9578-08c9-4330-bae8-1ee82a2acc71-kube-api-access-l4sjt\") pod \"nova-api-db-create-bpchz\" (UID: \"838c9578-08c9-4330-bae8-1ee82a2acc71\") " pod="openstack/nova-api-db-create-bpchz" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.704072 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l4sjt\" (UniqueName: \"kubernetes.io/projected/838c9578-08c9-4330-bae8-1ee82a2acc71-kube-api-access-l4sjt\") pod \"nova-api-db-create-bpchz\" (UID: \"838c9578-08c9-4330-bae8-1ee82a2acc71\") " pod="openstack/nova-api-db-create-bpchz" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.724528 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4sjt\" (UniqueName: \"kubernetes.io/projected/838c9578-08c9-4330-bae8-1ee82a2acc71-kube-api-access-l4sjt\") pod \"nova-api-db-create-bpchz\" (UID: \"838c9578-08c9-4330-bae8-1ee82a2acc71\") " pod="openstack/nova-api-db-create-bpchz" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.766821 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-db-create-js2pl"] Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.768494 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-js2pl" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.788672 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-js2pl"] Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.802232 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/neutron-559fd97bd5-6zst2" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.860786 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-db-create-bpchz" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.863409 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.870487 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b4b76d758-g7k8p"] Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.870801 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b4b76d758-g7k8p" podUID="83844cc6-633a-48b1-aa76-e1cf34582971" containerName="neutron-api" containerID="cri-o://4296e7f70193aaff1a495f543a6785dd939d89c451bf8a090ec069a2e6ac997c" gracePeriod=30 Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.871044 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/neutron-5b4b76d758-g7k8p" podUID="83844cc6-633a-48b1-aa76-e1cf34582971" containerName="neutron-httpd" containerID="cri-o://2054f5ff6ae03b875b6cbc963c060c686e4c972fa288f3d9ac7c5bfcdf50f28a" gracePeriod=30 Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.877702 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-db-create-pbcdp"] Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.890197 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pbcdp" Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.902755 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pbcdp"] Oct 01 16:00:17 crc kubenswrapper[4949]: I1001 16:00:17.907499 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkl4m\" (UniqueName: \"kubernetes.io/projected/9251efff-db93-42b2-a1ba-62cfdee08c7f-kube-api-access-tkl4m\") pod \"nova-cell0-db-create-js2pl\" (UID: \"9251efff-db93-42b2-a1ba-62cfdee08c7f\") " pod="openstack/nova-cell0-db-create-js2pl" Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.009659 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrfbg\" (UniqueName: \"kubernetes.io/projected/94c5fb52-ae85-4c88-a811-fde1ae61a33a-kube-api-access-zrfbg\") pod \"nova-cell1-db-create-pbcdp\" (UID: \"94c5fb52-ae85-4c88-a811-fde1ae61a33a\") " pod="openstack/nova-cell1-db-create-pbcdp" Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.009810 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tkl4m\" (UniqueName: \"kubernetes.io/projected/9251efff-db93-42b2-a1ba-62cfdee08c7f-kube-api-access-tkl4m\") pod \"nova-cell0-db-create-js2pl\" (UID: \"9251efff-db93-42b2-a1ba-62cfdee08c7f\") " pod="openstack/nova-cell0-db-create-js2pl" Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.031684 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkl4m\" (UniqueName: \"kubernetes.io/projected/9251efff-db93-42b2-a1ba-62cfdee08c7f-kube-api-access-tkl4m\") pod \"nova-cell0-db-create-js2pl\" (UID: \"9251efff-db93-42b2-a1ba-62cfdee08c7f\") " pod="openstack/nova-cell0-db-create-js2pl" Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.038906 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.038996 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.094563 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-db-create-js2pl" Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.112047 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrfbg\" (UniqueName: \"kubernetes.io/projected/94c5fb52-ae85-4c88-a811-fde1ae61a33a-kube-api-access-zrfbg\") pod \"nova-cell1-db-create-pbcdp\" (UID: \"94c5fb52-ae85-4c88-a811-fde1ae61a33a\") " pod="openstack/nova-cell1-db-create-pbcdp" Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.132181 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrfbg\" (UniqueName: \"kubernetes.io/projected/94c5fb52-ae85-4c88-a811-fde1ae61a33a-kube-api-access-zrfbg\") pod \"nova-cell1-db-create-pbcdp\" (UID: \"94c5fb52-ae85-4c88-a811-fde1ae61a33a\") " pod="openstack/nova-cell1-db-create-pbcdp" Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.228855 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-db-create-pbcdp" Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.994851 4949 generic.go:334] "Generic (PLEG): container finished" podID="83844cc6-633a-48b1-aa76-e1cf34582971" containerID="2054f5ff6ae03b875b6cbc963c060c686e4c972fa288f3d9ac7c5bfcdf50f28a" exitCode=0 Oct 01 16:00:18 crc kubenswrapper[4949]: I1001 16:00:18.994923 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b4b76d758-g7k8p" event={"ID":"83844cc6-633a-48b1-aa76-e1cf34582971","Type":"ContainerDied","Data":"2054f5ff6ae03b875b6cbc963c060c686e4c972fa288f3d9ac7c5bfcdf50f28a"} Oct 01 16:00:20 crc kubenswrapper[4949]: I1001 16:00:20.005456 4949 generic.go:334] "Generic (PLEG): container finished" podID="83844cc6-633a-48b1-aa76-e1cf34582971" containerID="4296e7f70193aaff1a495f543a6785dd939d89c451bf8a090ec069a2e6ac997c" exitCode=0 Oct 01 16:00:20 crc kubenswrapper[4949]: I1001 16:00:20.005491 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b4b76d758-g7k8p" event={"ID":"83844cc6-633a-48b1-aa76-e1cf34582971","Type":"ContainerDied","Data":"4296e7f70193aaff1a495f543a6785dd939d89c451bf8a090ec069a2e6ac997c"} Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:21.531713 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-db-create-bpchz"] Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:21.570182 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-db-create-js2pl"] Oct 01 16:00:22 crc kubenswrapper[4949]: W1001 16:00:21.624297 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9251efff_db93_42b2_a1ba_62cfdee08c7f.slice/crio-02e71ae209d0114ca17f956d0759444e486eaa913323f20f902978b0ae3377f4 WatchSource:0}: Error finding container 02e71ae209d0114ca17f956d0759444e486eaa913323f20f902978b0ae3377f4: Status 404 returned error can't find the container with id 
02e71ae209d0114ca17f956d0759444e486eaa913323f20f902978b0ae3377f4 Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:21.772347 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-db-create-pbcdp"] Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.025074 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pbcdp" event={"ID":"94c5fb52-ae85-4c88-a811-fde1ae61a33a","Type":"ContainerStarted","Data":"3d90b8786bacf03f48526bc45504ac7d65b7c8bb873ddfe990c613c2ad769367"} Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.025414 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pbcdp" event={"ID":"94c5fb52-ae85-4c88-a811-fde1ae61a33a","Type":"ContainerStarted","Data":"b4a5e71a617e9695574146426aecbe08e16a6294c40405e8e7e46d906afad646"} Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.027292 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-js2pl" event={"ID":"9251efff-db93-42b2-a1ba-62cfdee08c7f","Type":"ContainerStarted","Data":"35f2947ad6a5c0243df86288559da71faff80418f8959b879d0ddb82b1689af6"} Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.027334 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-js2pl" event={"ID":"9251efff-db93-42b2-a1ba-62cfdee08c7f","Type":"ContainerStarted","Data":"02e71ae209d0114ca17f956d0759444e486eaa913323f20f902978b0ae3377f4"} Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.029673 4949 generic.go:334] "Generic (PLEG): container finished" podID="838c9578-08c9-4330-bae8-1ee82a2acc71" containerID="d0ee2e913b1a3940c452da8b7cf4f4e528043c258432cf2f90a83594e9e39a7d" exitCode=0 Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.029828 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bpchz" 
event={"ID":"838c9578-08c9-4330-bae8-1ee82a2acc71","Type":"ContainerDied","Data":"d0ee2e913b1a3940c452da8b7cf4f4e528043c258432cf2f90a83594e9e39a7d"} Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.029877 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bpchz" event={"ID":"838c9578-08c9-4330-bae8-1ee82a2acc71","Type":"ContainerStarted","Data":"dc398d18315f2ab1f5df3b982675e0486489aec245f2a0d94468b38c581a368d"} Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.031800 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/openstackclient" event={"ID":"b0b87944-8572-4efb-b446-46b0aa47a9ed","Type":"ContainerStarted","Data":"858ff82d9596360edb3a7245f3533ac83f9116326d4e58d6a74e4fb2f74950e9"} Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.044073 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-db-create-pbcdp" podStartSLOduration=5.044052422 podStartE2EDuration="5.044052422s" podCreationTimestamp="2025-10-01 16:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:00:22.04145284 +0000 UTC m=+1121.347059031" watchObservedRunningTime="2025-10-01 16:00:22.044052422 +0000 UTC m=+1121.349658613" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.066489 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/openstackclient" podStartSLOduration=2.62952051 podStartE2EDuration="12.066467927s" podCreationTimestamp="2025-10-01 16:00:10 +0000 UTC" firstStartedPulling="2025-10-01 16:00:11.562363802 +0000 UTC m=+1110.867969993" lastFinishedPulling="2025-10-01 16:00:20.999311219 +0000 UTC m=+1120.304917410" observedRunningTime="2025-10-01 16:00:22.059026619 +0000 UTC m=+1121.364632810" watchObservedRunningTime="2025-10-01 16:00:22.066467927 +0000 UTC m=+1121.372074118" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 
16:00:22.100549 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-db-create-js2pl" podStartSLOduration=5.100533797 podStartE2EDuration="5.100533797s" podCreationTimestamp="2025-10-01 16:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:00:22.097445301 +0000 UTC m=+1121.403051492" watchObservedRunningTime="2025-10-01 16:00:22.100533797 +0000 UTC m=+1121.406139988" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.547333 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.609918 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-httpd-config\") pod \"83844cc6-633a-48b1-aa76-e1cf34582971\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.609969 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-config\") pod \"83844cc6-633a-48b1-aa76-e1cf34582971\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.610011 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-575l4\" (UniqueName: \"kubernetes.io/projected/83844cc6-633a-48b1-aa76-e1cf34582971-kube-api-access-575l4\") pod \"83844cc6-633a-48b1-aa76-e1cf34582971\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.610058 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovndb-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-ovndb-tls-certs\") pod \"83844cc6-633a-48b1-aa76-e1cf34582971\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.610082 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-combined-ca-bundle\") pod \"83844cc6-633a-48b1-aa76-e1cf34582971\" (UID: \"83844cc6-633a-48b1-aa76-e1cf34582971\") " Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.617916 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83844cc6-633a-48b1-aa76-e1cf34582971-kube-api-access-575l4" (OuterVolumeSpecName: "kube-api-access-575l4") pod "83844cc6-633a-48b1-aa76-e1cf34582971" (UID: "83844cc6-633a-48b1-aa76-e1cf34582971"). InnerVolumeSpecName "kube-api-access-575l4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.619058 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-httpd-config" (OuterVolumeSpecName: "httpd-config") pod "83844cc6-633a-48b1-aa76-e1cf34582971" (UID: "83844cc6-633a-48b1-aa76-e1cf34582971"). InnerVolumeSpecName "httpd-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.665780 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-config" (OuterVolumeSpecName: "config") pod "83844cc6-633a-48b1-aa76-e1cf34582971" (UID: "83844cc6-633a-48b1-aa76-e1cf34582971"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.697697 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83844cc6-633a-48b1-aa76-e1cf34582971" (UID: "83844cc6-633a-48b1-aa76-e1cf34582971"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.698265 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-ovndb-tls-certs" (OuterVolumeSpecName: "ovndb-tls-certs") pod "83844cc6-633a-48b1-aa76-e1cf34582971" (UID: "83844cc6-633a-48b1-aa76-e1cf34582971"). InnerVolumeSpecName "ovndb-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.711954 4949 reconciler_common.go:293] "Volume detached for volume \"httpd-config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-httpd-config\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.711993 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-config\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.712007 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-575l4\" (UniqueName: \"kubernetes.io/projected/83844cc6-633a-48b1-aa76-e1cf34582971-kube-api-access-575l4\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.712019 4949 reconciler_common.go:293] "Volume detached for volume \"ovndb-tls-certs\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-ovndb-tls-certs\") on node \"crc\" DevicePath \"\"" 
Oct 01 16:00:22 crc kubenswrapper[4949]: I1001 16:00:22.712030 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83844cc6-633a-48b1-aa76-e1cf34582971-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.041319 4949 generic.go:334] "Generic (PLEG): container finished" podID="9251efff-db93-42b2-a1ba-62cfdee08c7f" containerID="35f2947ad6a5c0243df86288559da71faff80418f8959b879d0ddb82b1689af6" exitCode=0 Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.041532 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-js2pl" event={"ID":"9251efff-db93-42b2-a1ba-62cfdee08c7f","Type":"ContainerDied","Data":"35f2947ad6a5c0243df86288559da71faff80418f8959b879d0ddb82b1689af6"} Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.043330 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-5b4b76d758-g7k8p" event={"ID":"83844cc6-633a-48b1-aa76-e1cf34582971","Type":"ContainerDied","Data":"4164bd3f1f93a37e7f003254156224cf9b668b89782c5ca6b0c1bddf366c033e"} Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.043502 4949 scope.go:117] "RemoveContainer" containerID="2054f5ff6ae03b875b6cbc963c060c686e4c972fa288f3d9ac7c5bfcdf50f28a" Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.043356 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-5b4b76d758-g7k8p" Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.044717 4949 generic.go:334] "Generic (PLEG): container finished" podID="94c5fb52-ae85-4c88-a811-fde1ae61a33a" containerID="3d90b8786bacf03f48526bc45504ac7d65b7c8bb873ddfe990c613c2ad769367" exitCode=0 Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.044902 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pbcdp" event={"ID":"94c5fb52-ae85-4c88-a811-fde1ae61a33a","Type":"ContainerDied","Data":"3d90b8786bacf03f48526bc45504ac7d65b7c8bb873ddfe990c613c2ad769367"} Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.103969 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-5b4b76d758-g7k8p"] Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.112650 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-5b4b76d758-g7k8p"] Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.123635 4949 scope.go:117] "RemoveContainer" containerID="4296e7f70193aaff1a495f543a6785dd939d89c451bf8a090ec069a2e6ac997c" Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.361622 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bpchz" Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.423916 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l4sjt\" (UniqueName: \"kubernetes.io/projected/838c9578-08c9-4330-bae8-1ee82a2acc71-kube-api-access-l4sjt\") pod \"838c9578-08c9-4330-bae8-1ee82a2acc71\" (UID: \"838c9578-08c9-4330-bae8-1ee82a2acc71\") " Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.427011 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/838c9578-08c9-4330-bae8-1ee82a2acc71-kube-api-access-l4sjt" (OuterVolumeSpecName: "kube-api-access-l4sjt") pod "838c9578-08c9-4330-bae8-1ee82a2acc71" (UID: "838c9578-08c9-4330-bae8-1ee82a2acc71"). InnerVolumeSpecName "kube-api-access-l4sjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.526242 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l4sjt\" (UniqueName: \"kubernetes.io/projected/838c9578-08c9-4330-bae8-1ee82a2acc71-kube-api-access-l4sjt\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.613297 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="83844cc6-633a-48b1-aa76-e1cf34582971" path="/var/lib/kubelet/pods/83844cc6-633a-48b1-aa76-e1cf34582971/volumes" Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.840156 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.840812 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="ceilometer-central-agent" containerID="cri-o://4b0f4da982ac1cb12ae872a62f2fa0647f16d867df5c73fdff58d635c7b2c4e2" gracePeriod=30 Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.841532 
4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="proxy-httpd" containerID="cri-o://d07612193ee4a35973eda9f834ce730ba9cb448535bb8f8f248cde50e4eba70d" gracePeriod=30 Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.841659 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="sg-core" containerID="cri-o://5740517d36f8c670db6bde5ccf307226113b3f4bb8e2b99873b800747677b625" gracePeriod=30 Oct 01 16:00:23 crc kubenswrapper[4949]: I1001 16:00:23.841718 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="ceilometer-notification-agent" containerID="cri-o://2c77e2a952014e68e92bfd5fd2704deaeab4a43eed03f675bbefd4f124091bc4" gracePeriod=30 Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.066157 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-db-create-bpchz" event={"ID":"838c9578-08c9-4330-bae8-1ee82a2acc71","Type":"ContainerDied","Data":"dc398d18315f2ab1f5df3b982675e0486489aec245f2a0d94468b38c581a368d"} Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.066186 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-db-create-bpchz" Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.066214 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc398d18315f2ab1f5df3b982675e0486489aec245f2a0d94468b38c581a368d" Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.068759 4949 generic.go:334] "Generic (PLEG): container finished" podID="a5778394-f962-4a15-960f-85a81428d1d0" containerID="5740517d36f8c670db6bde5ccf307226113b3f4bb8e2b99873b800747677b625" exitCode=2 Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.068830 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerDied","Data":"5740517d36f8c670db6bde5ccf307226113b3f4bb8e2b99873b800747677b625"} Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.480083 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pbcdp" Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.491689 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-js2pl" Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.548278 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrfbg\" (UniqueName: \"kubernetes.io/projected/94c5fb52-ae85-4c88-a811-fde1ae61a33a-kube-api-access-zrfbg\") pod \"94c5fb52-ae85-4c88-a811-fde1ae61a33a\" (UID: \"94c5fb52-ae85-4c88-a811-fde1ae61a33a\") " Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.548345 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkl4m\" (UniqueName: \"kubernetes.io/projected/9251efff-db93-42b2-a1ba-62cfdee08c7f-kube-api-access-tkl4m\") pod \"9251efff-db93-42b2-a1ba-62cfdee08c7f\" (UID: \"9251efff-db93-42b2-a1ba-62cfdee08c7f\") " Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.553974 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94c5fb52-ae85-4c88-a811-fde1ae61a33a-kube-api-access-zrfbg" (OuterVolumeSpecName: "kube-api-access-zrfbg") pod "94c5fb52-ae85-4c88-a811-fde1ae61a33a" (UID: "94c5fb52-ae85-4c88-a811-fde1ae61a33a"). InnerVolumeSpecName "kube-api-access-zrfbg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.561472 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9251efff-db93-42b2-a1ba-62cfdee08c7f-kube-api-access-tkl4m" (OuterVolumeSpecName: "kube-api-access-tkl4m") pod "9251efff-db93-42b2-a1ba-62cfdee08c7f" (UID: "9251efff-db93-42b2-a1ba-62cfdee08c7f"). InnerVolumeSpecName "kube-api-access-tkl4m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.650708 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrfbg\" (UniqueName: \"kubernetes.io/projected/94c5fb52-ae85-4c88-a811-fde1ae61a33a-kube-api-access-zrfbg\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:24 crc kubenswrapper[4949]: I1001 16:00:24.650741 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tkl4m\" (UniqueName: \"kubernetes.io/projected/9251efff-db93-42b2-a1ba-62cfdee08c7f-kube-api-access-tkl4m\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.080355 4949 generic.go:334] "Generic (PLEG): container finished" podID="a5778394-f962-4a15-960f-85a81428d1d0" containerID="d07612193ee4a35973eda9f834ce730ba9cb448535bb8f8f248cde50e4eba70d" exitCode=0 Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.080385 4949 generic.go:334] "Generic (PLEG): container finished" podID="a5778394-f962-4a15-960f-85a81428d1d0" containerID="4b0f4da982ac1cb12ae872a62f2fa0647f16d867df5c73fdff58d635c7b2c4e2" exitCode=0 Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.080447 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerDied","Data":"d07612193ee4a35973eda9f834ce730ba9cb448535bb8f8f248cde50e4eba70d"} Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.080499 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerDied","Data":"4b0f4da982ac1cb12ae872a62f2fa0647f16d867df5c73fdff58d635c7b2c4e2"} Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.082326 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-db-create-pbcdp" 
event={"ID":"94c5fb52-ae85-4c88-a811-fde1ae61a33a","Type":"ContainerDied","Data":"b4a5e71a617e9695574146426aecbe08e16a6294c40405e8e7e46d906afad646"} Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.082342 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-db-create-pbcdp" Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.082353 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4a5e71a617e9695574146426aecbe08e16a6294c40405e8e7e46d906afad646" Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.084036 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-db-create-js2pl" event={"ID":"9251efff-db93-42b2-a1ba-62cfdee08c7f","Type":"ContainerDied","Data":"02e71ae209d0114ca17f956d0759444e486eaa913323f20f902978b0ae3377f4"} Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.084066 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02e71ae209d0114ca17f956d0759444e486eaa913323f20f902978b0ae3377f4" Oct 01 16:00:25 crc kubenswrapper[4949]: I1001 16:00:25.084081 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-db-create-js2pl" Oct 01 16:00:26 crc kubenswrapper[4949]: I1001 16:00:26.198549 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 01 16:00:26 crc kubenswrapper[4949]: I1001 16:00:26.199490 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/kube-state-metrics-0" podUID="a8530d62-62b2-46a8-be1c-7061ce71f1c2" containerName="kube-state-metrics" containerID="cri-o://5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69" gracePeriod=30 Oct 01 16:00:26 crc kubenswrapper[4949]: I1001 16:00:26.267516 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/kube-state-metrics-0" podUID="a8530d62-62b2-46a8-be1c-7061ce71f1c2" containerName="kube-state-metrics" probeResult="failure" output="Get \"http://10.217.0.106:8081/readyz\": dial tcp 10.217.0.106:8081: connect: connection refused" Oct 01 16:00:26 crc kubenswrapper[4949]: I1001 16:00:26.661003 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 01 16:00:26 crc kubenswrapper[4949]: I1001 16:00:26.684887 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hf7s9\" (UniqueName: \"kubernetes.io/projected/a8530d62-62b2-46a8-be1c-7061ce71f1c2-kube-api-access-hf7s9\") pod \"a8530d62-62b2-46a8-be1c-7061ce71f1c2\" (UID: \"a8530d62-62b2-46a8-be1c-7061ce71f1c2\") " Oct 01 16:00:26 crc kubenswrapper[4949]: I1001 16:00:26.703141 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a8530d62-62b2-46a8-be1c-7061ce71f1c2-kube-api-access-hf7s9" (OuterVolumeSpecName: "kube-api-access-hf7s9") pod "a8530d62-62b2-46a8-be1c-7061ce71f1c2" (UID: "a8530d62-62b2-46a8-be1c-7061ce71f1c2"). InnerVolumeSpecName "kube-api-access-hf7s9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:26 crc kubenswrapper[4949]: I1001 16:00:26.787109 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hf7s9\" (UniqueName: \"kubernetes.io/projected/a8530d62-62b2-46a8-be1c-7061ce71f1c2-kube-api-access-hf7s9\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.100805 4949 generic.go:334] "Generic (PLEG): container finished" podID="a8530d62-62b2-46a8-be1c-7061ce71f1c2" containerID="5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69" exitCode=2 Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.100902 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.101012 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8530d62-62b2-46a8-be1c-7061ce71f1c2","Type":"ContainerDied","Data":"5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69"} Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.101859 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"a8530d62-62b2-46a8-be1c-7061ce71f1c2","Type":"ContainerDied","Data":"bafeda7a5783bda51d7bfb433b86f713b0f6beafe670e7dfb5118258a0eb0eb3"} Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.101949 4949 scope.go:117] "RemoveContainer" containerID="5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.132710 4949 scope.go:117] "RemoveContainer" containerID="5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69" Oct 01 16:00:27 crc kubenswrapper[4949]: E1001 16:00:27.133280 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69\": container with ID starting with 5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69 not found: ID does not exist" containerID="5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.133316 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69"} err="failed to get container status \"5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69\": rpc error: code = NotFound desc = could not find container \"5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69\": container with ID starting with 5674f2d49d805f85b8a995f4a345a82f8b2bd21f686da3abdb1c3f3c76e3cb69 not found: ID does not exist" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.141074 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.149644 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.160908 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/kube-state-metrics-0"] Oct 01 16:00:27 crc kubenswrapper[4949]: E1001 16:00:27.161395 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83844cc6-633a-48b1-aa76-e1cf34582971" containerName="neutron-httpd" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161420 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="83844cc6-633a-48b1-aa76-e1cf34582971" containerName="neutron-httpd" Oct 01 16:00:27 crc kubenswrapper[4949]: E1001 16:00:27.161439 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="838c9578-08c9-4330-bae8-1ee82a2acc71" containerName="mariadb-database-create" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 
16:00:27.161447 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="838c9578-08c9-4330-bae8-1ee82a2acc71" containerName="mariadb-database-create" Oct 01 16:00:27 crc kubenswrapper[4949]: E1001 16:00:27.161467 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a8530d62-62b2-46a8-be1c-7061ce71f1c2" containerName="kube-state-metrics" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161475 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a8530d62-62b2-46a8-be1c-7061ce71f1c2" containerName="kube-state-metrics" Oct 01 16:00:27 crc kubenswrapper[4949]: E1001 16:00:27.161484 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83844cc6-633a-48b1-aa76-e1cf34582971" containerName="neutron-api" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161493 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="83844cc6-633a-48b1-aa76-e1cf34582971" containerName="neutron-api" Oct 01 16:00:27 crc kubenswrapper[4949]: E1001 16:00:27.161507 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9251efff-db93-42b2-a1ba-62cfdee08c7f" containerName="mariadb-database-create" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161514 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9251efff-db93-42b2-a1ba-62cfdee08c7f" containerName="mariadb-database-create" Oct 01 16:00:27 crc kubenswrapper[4949]: E1001 16:00:27.161531 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="94c5fb52-ae85-4c88-a811-fde1ae61a33a" containerName="mariadb-database-create" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161539 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="94c5fb52-ae85-4c88-a811-fde1ae61a33a" containerName="mariadb-database-create" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161727 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="83844cc6-633a-48b1-aa76-e1cf34582971" containerName="neutron-httpd" Oct 01 16:00:27 crc 
kubenswrapper[4949]: I1001 16:00:27.161742 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="838c9578-08c9-4330-bae8-1ee82a2acc71" containerName="mariadb-database-create" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161757 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a8530d62-62b2-46a8-be1c-7061ce71f1c2" containerName="kube-state-metrics" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161770 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="83844cc6-633a-48b1-aa76-e1cf34582971" containerName="neutron-api" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161784 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="94c5fb52-ae85-4c88-a811-fde1ae61a33a" containerName="mariadb-database-create" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.161806 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9251efff-db93-42b2-a1ba-62cfdee08c7f" containerName="mariadb-database-create" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.162711 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.167586 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-kube-state-metrics-svc" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.167762 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"kube-state-metrics-tls-config" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.181156 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.194477 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cf67ff-4577-4cea-9cae-bf40fce7d527-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.194535 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hb2zz\" (UniqueName: \"kubernetes.io/projected/d3cf67ff-4577-4cea-9cae-bf40fce7d527-kube-api-access-hb2zz\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.194954 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3cf67ff-4577-4cea-9cae-bf40fce7d527-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.195065 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: 
\"kubernetes.io/secret/d3cf67ff-4577-4cea-9cae-bf40fce7d527-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.297208 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3cf67ff-4577-4cea-9cae-bf40fce7d527-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.297785 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hb2zz\" (UniqueName: \"kubernetes.io/projected/d3cf67ff-4577-4cea-9cae-bf40fce7d527-kube-api-access-hb2zz\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.297931 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3cf67ff-4577-4cea-9cae-bf40fce7d527-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.297957 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d3cf67ff-4577-4cea-9cae-bf40fce7d527-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.300725 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/d3cf67ff-4577-4cea-9cae-bf40fce7d527-combined-ca-bundle\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.302712 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-config\" (UniqueName: \"kubernetes.io/secret/d3cf67ff-4577-4cea-9cae-bf40fce7d527-kube-state-metrics-tls-config\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.305043 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-state-metrics-tls-certs\" (UniqueName: \"kubernetes.io/secret/d3cf67ff-4577-4cea-9cae-bf40fce7d527-kube-state-metrics-tls-certs\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.321808 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hb2zz\" (UniqueName: \"kubernetes.io/projected/d3cf67ff-4577-4cea-9cae-bf40fce7d527-kube-api-access-hb2zz\") pod \"kube-state-metrics-0\" (UID: \"d3cf67ff-4577-4cea-9cae-bf40fce7d527\") " pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.480568 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/kube-state-metrics-0" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.622918 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a8530d62-62b2-46a8-be1c-7061ce71f1c2" path="/var/lib/kubelet/pods/a8530d62-62b2-46a8-be1c-7061ce71f1c2/volumes" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.683413 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-3f61-account-create-c4f44"] Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.684573 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3f61-account-create-c4f44" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.688034 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-db-secret" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.700821 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3f61-account-create-c4f44"] Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.706215 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gqmj5\" (UniqueName: \"kubernetes.io/projected/12d472aa-0dd1-4b72-a1e6-384fb866b92f-kube-api-access-gqmj5\") pod \"nova-api-3f61-account-create-c4f44\" (UID: \"12d472aa-0dd1-4b72-a1e6-384fb866b92f\") " pod="openstack/nova-api-3f61-account-create-c4f44" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.807715 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gqmj5\" (UniqueName: \"kubernetes.io/projected/12d472aa-0dd1-4b72-a1e6-384fb866b92f-kube-api-access-gqmj5\") pod \"nova-api-3f61-account-create-c4f44\" (UID: \"12d472aa-0dd1-4b72-a1e6-384fb866b92f\") " pod="openstack/nova-api-3f61-account-create-c4f44" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.828878 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-gqmj5\" (UniqueName: \"kubernetes.io/projected/12d472aa-0dd1-4b72-a1e6-384fb866b92f-kube-api-access-gqmj5\") pod \"nova-api-3f61-account-create-c4f44\" (UID: \"12d472aa-0dd1-4b72-a1e6-384fb866b92f\") " pod="openstack/nova-api-3f61-account-create-c4f44" Oct 01 16:00:27 crc kubenswrapper[4949]: I1001 16:00:27.990407 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/kube-state-metrics-0"] Oct 01 16:00:28 crc kubenswrapper[4949]: I1001 16:00:28.002180 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-3f61-account-create-c4f44" Oct 01 16:00:28 crc kubenswrapper[4949]: I1001 16:00:28.115883 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d3cf67ff-4577-4cea-9cae-bf40fce7d527","Type":"ContainerStarted","Data":"143ce53edd28aefad2f95f7d6970207ac93d21a9fabbad007e61bc3a5c911dea"} Oct 01 16:00:28 crc kubenswrapper[4949]: I1001 16:00:28.455094 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-3f61-account-create-c4f44"] Oct 01 16:00:28 crc kubenswrapper[4949]: W1001 16:00:28.462076 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod12d472aa_0dd1_4b72_a1e6_384fb866b92f.slice/crio-14777874efb3d0f6f065bfd044dcd1104db99857433991b7e80ab5a6685a1a3f WatchSource:0}: Error finding container 14777874efb3d0f6f065bfd044dcd1104db99857433991b7e80ab5a6685a1a3f: Status 404 returned error can't find the container with id 14777874efb3d0f6f065bfd044dcd1104db99857433991b7e80ab5a6685a1a3f Oct 01 16:00:29 crc kubenswrapper[4949]: I1001 16:00:29.126425 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3f61-account-create-c4f44" event={"ID":"12d472aa-0dd1-4b72-a1e6-384fb866b92f","Type":"ContainerStarted","Data":"9232a624cce7eb367e7276f825c669f16894df640253fbd499fc15094cd8b29b"} Oct 01 16:00:29 crc kubenswrapper[4949]: 
I1001 16:00:29.126922 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3f61-account-create-c4f44" event={"ID":"12d472aa-0dd1-4b72-a1e6-384fb866b92f","Type":"ContainerStarted","Data":"14777874efb3d0f6f065bfd044dcd1104db99857433991b7e80ab5a6685a1a3f"} Oct 01 16:00:29 crc kubenswrapper[4949]: I1001 16:00:29.128718 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/kube-state-metrics-0" event={"ID":"d3cf67ff-4577-4cea-9cae-bf40fce7d527","Type":"ContainerStarted","Data":"e861918c9aa707c3c995dcc3c6e7c5fe0f4e49255ae13bbe355e458942634bcf"} Oct 01 16:00:29 crc kubenswrapper[4949]: I1001 16:00:29.128880 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/kube-state-metrics-0" Oct 01 16:00:29 crc kubenswrapper[4949]: I1001 16:00:29.131854 4949 generic.go:334] "Generic (PLEG): container finished" podID="a5778394-f962-4a15-960f-85a81428d1d0" containerID="2c77e2a952014e68e92bfd5fd2704deaeab4a43eed03f675bbefd4f124091bc4" exitCode=0 Oct 01 16:00:29 crc kubenswrapper[4949]: I1001 16:00:29.131887 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerDied","Data":"2c77e2a952014e68e92bfd5fd2704deaeab4a43eed03f675bbefd4f124091bc4"} Oct 01 16:00:29 crc kubenswrapper[4949]: I1001 16:00:29.141422 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-3f61-account-create-c4f44" podStartSLOduration=2.141402936 podStartE2EDuration="2.141402936s" podCreationTimestamp="2025-10-01 16:00:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:00:29.140276015 +0000 UTC m=+1128.445882226" watchObservedRunningTime="2025-10-01 16:00:29.141402936 +0000 UTC m=+1128.447009137" Oct 01 16:00:29 crc kubenswrapper[4949]: I1001 16:00:29.159848 4949 pod_startup_latency_tracker.go:104] "Observed 
pod startup duration" pod="openstack/kube-state-metrics-0" podStartSLOduration=1.765809344 podStartE2EDuration="2.159824321s" podCreationTimestamp="2025-10-01 16:00:27 +0000 UTC" firstStartedPulling="2025-10-01 16:00:27.996745567 +0000 UTC m=+1127.302351758" lastFinishedPulling="2025-10-01 16:00:28.390760544 +0000 UTC m=+1127.696366735" observedRunningTime="2025-10-01 16:00:29.154842631 +0000 UTC m=+1128.460448822" watchObservedRunningTime="2025-10-01 16:00:29.159824321 +0000 UTC m=+1128.465430512" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.146245 4949 generic.go:334] "Generic (PLEG): container finished" podID="12d472aa-0dd1-4b72-a1e6-384fb866b92f" containerID="9232a624cce7eb367e7276f825c669f16894df640253fbd499fc15094cd8b29b" exitCode=0 Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.146425 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3f61-account-create-c4f44" event={"ID":"12d472aa-0dd1-4b72-a1e6-384fb866b92f","Type":"ContainerDied","Data":"9232a624cce7eb367e7276f825c669f16894df640253fbd499fc15094cd8b29b"} Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.398732 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.465050 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-combined-ca-bundle\") pod \"a5778394-f962-4a15-960f-85a81428d1d0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.465245 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-sg-core-conf-yaml\") pod \"a5778394-f962-4a15-960f-85a81428d1d0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.465303 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-scripts\") pod \"a5778394-f962-4a15-960f-85a81428d1d0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.465348 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-run-httpd\") pod \"a5778394-f962-4a15-960f-85a81428d1d0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.465391 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-log-httpd\") pod \"a5778394-f962-4a15-960f-85a81428d1d0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.465427 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-config-data\") pod \"a5778394-f962-4a15-960f-85a81428d1d0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.465455 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zzvd\" (UniqueName: \"kubernetes.io/projected/a5778394-f962-4a15-960f-85a81428d1d0-kube-api-access-9zzvd\") pod \"a5778394-f962-4a15-960f-85a81428d1d0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.466139 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "a5778394-f962-4a15-960f-85a81428d1d0" (UID: "a5778394-f962-4a15-960f-85a81428d1d0"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.466160 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "a5778394-f962-4a15-960f-85a81428d1d0" (UID: "a5778394-f962-4a15-960f-85a81428d1d0"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.478334 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5778394-f962-4a15-960f-85a81428d1d0-kube-api-access-9zzvd" (OuterVolumeSpecName: "kube-api-access-9zzvd") pod "a5778394-f962-4a15-960f-85a81428d1d0" (UID: "a5778394-f962-4a15-960f-85a81428d1d0"). InnerVolumeSpecName "kube-api-access-9zzvd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.483047 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-scripts" (OuterVolumeSpecName: "scripts") pod "a5778394-f962-4a15-960f-85a81428d1d0" (UID: "a5778394-f962-4a15-960f-85a81428d1d0"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.483945 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.496717 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/a5778394-f962-4a15-960f-85a81428d1d0-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.496783 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zzvd\" (UniqueName: \"kubernetes.io/projected/a5778394-f962-4a15-960f-85a81428d1d0-kube-api-access-9zzvd\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.508568 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "a5778394-f962-4a15-960f-85a81428d1d0" (UID: "a5778394-f962-4a15-960f-85a81428d1d0"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.597360 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-config-data" (OuterVolumeSpecName: "config-data") pod "a5778394-f962-4a15-960f-85a81428d1d0" (UID: "a5778394-f962-4a15-960f-85a81428d1d0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.598563 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-config-data\") pod \"a5778394-f962-4a15-960f-85a81428d1d0\" (UID: \"a5778394-f962-4a15-960f-85a81428d1d0\") " Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.599073 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.599187 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:30 crc kubenswrapper[4949]: W1001 16:00:30.599343 4949 empty_dir.go:500] Warning: Unmount skipped because path does not exist: /var/lib/kubelet/pods/a5778394-f962-4a15-960f-85a81428d1d0/volumes/kubernetes.io~secret/config-data Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.599411 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-config-data" (OuterVolumeSpecName: "config-data") pod "a5778394-f962-4a15-960f-85a81428d1d0" (UID: "a5778394-f962-4a15-960f-85a81428d1d0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.633547 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a5778394-f962-4a15-960f-85a81428d1d0" (UID: "a5778394-f962-4a15-960f-85a81428d1d0"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.702552 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:30 crc kubenswrapper[4949]: I1001 16:00:30.703209 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a5778394-f962-4a15-960f-85a81428d1d0-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.168018 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.172199 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"a5778394-f962-4a15-960f-85a81428d1d0","Type":"ContainerDied","Data":"df8d1c45c7a1f28112b7a2ba4ddcdaccfb88f324feeb362fc47a187baf2cde8b"} Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.172256 4949 scope.go:117] "RemoveContainer" containerID="d07612193ee4a35973eda9f834ce730ba9cb448535bb8f8f248cde50e4eba70d" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.201101 4949 scope.go:117] "RemoveContainer" containerID="5740517d36f8c670db6bde5ccf307226113b3f4bb8e2b99873b800747677b625" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.202760 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.235437 4949 scope.go:117] "RemoveContainer" containerID="2c77e2a952014e68e92bfd5fd2704deaeab4a43eed03f675bbefd4f124091bc4" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.235609 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.244001 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:31 crc kubenswrapper[4949]: E1001 16:00:31.244497 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="proxy-httpd" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.244521 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="proxy-httpd" Oct 01 16:00:31 crc kubenswrapper[4949]: E1001 16:00:31.244536 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="ceilometer-notification-agent" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.244544 
4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="ceilometer-notification-agent" Oct 01 16:00:31 crc kubenswrapper[4949]: E1001 16:00:31.244567 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="sg-core" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.244574 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="sg-core" Oct 01 16:00:31 crc kubenswrapper[4949]: E1001 16:00:31.244590 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="ceilometer-central-agent" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.244597 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="ceilometer-central-agent" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.244800 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="proxy-httpd" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.244827 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="ceilometer-notification-agent" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.244840 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="ceilometer-central-agent" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.244856 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5778394-f962-4a15-960f-85a81428d1d0" containerName="sg-core" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.246565 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.249060 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.249581 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.249603 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.254138 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.271506 4949 scope.go:117] "RemoveContainer" containerID="4b0f4da982ac1cb12ae872a62f2fa0647f16d867df5c73fdff58d635c7b2c4e2" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.315084 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.315152 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-log-httpd\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.315170 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " 
pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.315225 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gd65w\" (UniqueName: \"kubernetes.io/projected/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-kube-api-access-gd65w\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.315253 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.315316 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-scripts\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.315350 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-config-data\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.315384 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-run-httpd\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.416943 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-gd65w\" (UniqueName: \"kubernetes.io/projected/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-kube-api-access-gd65w\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.417211 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.417290 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-scripts\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.417332 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-config-data\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.417380 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-run-httpd\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.417406 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: 
\"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.417448 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-log-httpd\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.417470 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.421381 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.421501 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-run-httpd\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.421517 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-log-httpd\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.424244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-scripts\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.424571 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.424914 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.427000 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-config-data\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.439544 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gd65w\" (UniqueName: \"kubernetes.io/projected/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-kube-api-access-gd65w\") pod \"ceilometer-0\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.468319 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3f61-account-create-c4f44" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.518535 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gqmj5\" (UniqueName: \"kubernetes.io/projected/12d472aa-0dd1-4b72-a1e6-384fb866b92f-kube-api-access-gqmj5\") pod \"12d472aa-0dd1-4b72-a1e6-384fb866b92f\" (UID: \"12d472aa-0dd1-4b72-a1e6-384fb866b92f\") " Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.521531 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12d472aa-0dd1-4b72-a1e6-384fb866b92f-kube-api-access-gqmj5" (OuterVolumeSpecName: "kube-api-access-gqmj5") pod "12d472aa-0dd1-4b72-a1e6-384fb866b92f" (UID: "12d472aa-0dd1-4b72-a1e6-384fb866b92f"). InnerVolumeSpecName "kube-api-access-gqmj5". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.583482 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.611775 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5778394-f962-4a15-960f-85a81428d1d0" path="/var/lib/kubelet/pods/a5778394-f962-4a15-960f-85a81428d1d0/volumes" Oct 01 16:00:31 crc kubenswrapper[4949]: I1001 16:00:31.620278 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gqmj5\" (UniqueName: \"kubernetes.io/projected/12d472aa-0dd1-4b72-a1e6-384fb866b92f-kube-api-access-gqmj5\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:32 crc kubenswrapper[4949]: I1001 16:00:32.043618 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:32 crc kubenswrapper[4949]: I1001 16:00:32.180022 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-3f61-account-create-c4f44" Oct 01 16:00:32 crc kubenswrapper[4949]: I1001 16:00:32.180026 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-3f61-account-create-c4f44" event={"ID":"12d472aa-0dd1-4b72-a1e6-384fb866b92f","Type":"ContainerDied","Data":"14777874efb3d0f6f065bfd044dcd1104db99857433991b7e80ab5a6685a1a3f"} Oct 01 16:00:32 crc kubenswrapper[4949]: I1001 16:00:32.180152 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="14777874efb3d0f6f065bfd044dcd1104db99857433991b7e80ab5a6685a1a3f" Oct 01 16:00:32 crc kubenswrapper[4949]: I1001 16:00:32.181417 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerStarted","Data":"fb518cb7996b86d0cc2fb3d9b4e0f506739d8a47a5a0f3f85d829ffd77ccec63"} Oct 01 16:00:32 crc kubenswrapper[4949]: I1001 16:00:32.762936 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:33 crc kubenswrapper[4949]: I1001 16:00:33.198265 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerStarted","Data":"7e87c790fabda9344b33ba74b81cd835661c19db51a895a0d3d2ef30ca000edd"} Oct 01 16:00:34 crc kubenswrapper[4949]: I1001 16:00:34.210419 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerStarted","Data":"1a9341233116f1a5bc9b6a797bd0e03a795a5a97034377639e714ac5b8cf3965"} Oct 01 16:00:34 crc kubenswrapper[4949]: I1001 16:00:34.210751 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerStarted","Data":"e9daf6bfce93f60aa08e2e7040b37d57982a080a4a28ee2cc5b1c4e4495e42bd"} Oct 01 16:00:36 crc kubenswrapper[4949]: 
I1001 16:00:36.234332 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerStarted","Data":"0b96858e818a4c6e6c8d294ff21ec55ebe8ae2b732eb598b470be263475628c0"} Oct 01 16:00:36 crc kubenswrapper[4949]: I1001 16:00:36.235096 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 01 16:00:36 crc kubenswrapper[4949]: I1001 16:00:36.234574 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="ceilometer-central-agent" containerID="cri-o://7e87c790fabda9344b33ba74b81cd835661c19db51a895a0d3d2ef30ca000edd" gracePeriod=30 Oct 01 16:00:36 crc kubenswrapper[4949]: I1001 16:00:36.234681 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="ceilometer-notification-agent" containerID="cri-o://e9daf6bfce93f60aa08e2e7040b37d57982a080a4a28ee2cc5b1c4e4495e42bd" gracePeriod=30 Oct 01 16:00:36 crc kubenswrapper[4949]: I1001 16:00:36.234744 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="sg-core" containerID="cri-o://1a9341233116f1a5bc9b6a797bd0e03a795a5a97034377639e714ac5b8cf3965" gracePeriod=30 Oct 01 16:00:36 crc kubenswrapper[4949]: I1001 16:00:36.234640 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="proxy-httpd" containerID="cri-o://0b96858e818a4c6e6c8d294ff21ec55ebe8ae2b732eb598b470be263475628c0" gracePeriod=30 Oct 01 16:00:36 crc kubenswrapper[4949]: I1001 16:00:36.272597 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" 
podStartSLOduration=1.702945433 podStartE2EDuration="5.272576465s" podCreationTimestamp="2025-10-01 16:00:31 +0000 UTC" firstStartedPulling="2025-10-01 16:00:32.058700888 +0000 UTC m=+1131.364307079" lastFinishedPulling="2025-10-01 16:00:35.62833191 +0000 UTC m=+1134.933938111" observedRunningTime="2025-10-01 16:00:36.269716175 +0000 UTC m=+1135.575322366" watchObservedRunningTime="2025-10-01 16:00:36.272576465 +0000 UTC m=+1135.578182656" Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.243595 4949 generic.go:334] "Generic (PLEG): container finished" podID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerID="0b96858e818a4c6e6c8d294ff21ec55ebe8ae2b732eb598b470be263475628c0" exitCode=0 Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.244074 4949 generic.go:334] "Generic (PLEG): container finished" podID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerID="1a9341233116f1a5bc9b6a797bd0e03a795a5a97034377639e714ac5b8cf3965" exitCode=2 Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.244083 4949 generic.go:334] "Generic (PLEG): container finished" podID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerID="e9daf6bfce93f60aa08e2e7040b37d57982a080a4a28ee2cc5b1c4e4495e42bd" exitCode=0 Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.243670 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerDied","Data":"0b96858e818a4c6e6c8d294ff21ec55ebe8ae2b732eb598b470be263475628c0"} Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.244140 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerDied","Data":"1a9341233116f1a5bc9b6a797bd0e03a795a5a97034377639e714ac5b8cf3965"} Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.244170 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerDied","Data":"e9daf6bfce93f60aa08e2e7040b37d57982a080a4a28ee2cc5b1c4e4495e42bd"} Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.489535 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/kube-state-metrics-0" Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.825012 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-8367-account-create-vptlt"] Oct 01 16:00:37 crc kubenswrapper[4949]: E1001 16:00:37.825396 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="12d472aa-0dd1-4b72-a1e6-384fb866b92f" containerName="mariadb-account-create" Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.825413 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="12d472aa-0dd1-4b72-a1e6-384fb866b92f" containerName="mariadb-account-create" Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.825623 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="12d472aa-0dd1-4b72-a1e6-384fb866b92f" containerName="mariadb-account-create" Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.826161 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8367-account-create-vptlt" Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.828305 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-db-secret" Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.833955 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8367-account-create-vptlt"] Oct 01 16:00:37 crc kubenswrapper[4949]: I1001 16:00:37.925202 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj2nd\" (UniqueName: \"kubernetes.io/projected/084e617b-6327-45db-8d6c-61f5d0f779c2-kube-api-access-qj2nd\") pod \"nova-cell0-8367-account-create-vptlt\" (UID: \"084e617b-6327-45db-8d6c-61f5d0f779c2\") " pod="openstack/nova-cell0-8367-account-create-vptlt" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.029509 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qj2nd\" (UniqueName: \"kubernetes.io/projected/084e617b-6327-45db-8d6c-61f5d0f779c2-kube-api-access-qj2nd\") pod \"nova-cell0-8367-account-create-vptlt\" (UID: \"084e617b-6327-45db-8d6c-61f5d0f779c2\") " pod="openstack/nova-cell0-8367-account-create-vptlt" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.033573 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-67ab-account-create-8wlvb"] Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.034907 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-67ab-account-create-8wlvb" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.037874 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-db-secret" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.041751 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-67ab-account-create-8wlvb"] Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.065076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj2nd\" (UniqueName: \"kubernetes.io/projected/084e617b-6327-45db-8d6c-61f5d0f779c2-kube-api-access-qj2nd\") pod \"nova-cell0-8367-account-create-vptlt\" (UID: \"084e617b-6327-45db-8d6c-61f5d0f779c2\") " pod="openstack/nova-cell0-8367-account-create-vptlt" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.130759 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lcq2z\" (UniqueName: \"kubernetes.io/projected/0b3553a0-42ab-4edb-9a35-4268e81df5ce-kube-api-access-lcq2z\") pod \"nova-cell1-67ab-account-create-8wlvb\" (UID: \"0b3553a0-42ab-4edb-9a35-4268e81df5ce\") " pod="openstack/nova-cell1-67ab-account-create-8wlvb" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.155331 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8367-account-create-vptlt" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.232635 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lcq2z\" (UniqueName: \"kubernetes.io/projected/0b3553a0-42ab-4edb-9a35-4268e81df5ce-kube-api-access-lcq2z\") pod \"nova-cell1-67ab-account-create-8wlvb\" (UID: \"0b3553a0-42ab-4edb-9a35-4268e81df5ce\") " pod="openstack/nova-cell1-67ab-account-create-8wlvb" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.263948 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lcq2z\" (UniqueName: \"kubernetes.io/projected/0b3553a0-42ab-4edb-9a35-4268e81df5ce-kube-api-access-lcq2z\") pod \"nova-cell1-67ab-account-create-8wlvb\" (UID: \"0b3553a0-42ab-4edb-9a35-4268e81df5ce\") " pod="openstack/nova-cell1-67ab-account-create-8wlvb" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.355456 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-67ab-account-create-8wlvb" Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.638630 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-8367-account-create-vptlt"] Oct 01 16:00:38 crc kubenswrapper[4949]: I1001 16:00:38.801048 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-67ab-account-create-8wlvb"] Oct 01 16:00:38 crc kubenswrapper[4949]: W1001 16:00:38.807165 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b3553a0_42ab_4edb_9a35_4268e81df5ce.slice/crio-96b8c3f2ea65c84289119d03f62f637a73baaaf09b6db1807ede860d0657e9ef WatchSource:0}: Error finding container 96b8c3f2ea65c84289119d03f62f637a73baaaf09b6db1807ede860d0657e9ef: Status 404 returned error can't find the container with id 96b8c3f2ea65c84289119d03f62f637a73baaaf09b6db1807ede860d0657e9ef Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.268060 4949 generic.go:334] "Generic (PLEG): container finished" podID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerID="7e87c790fabda9344b33ba74b81cd835661c19db51a895a0d3d2ef30ca000edd" exitCode=0 Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.268158 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerDied","Data":"7e87c790fabda9344b33ba74b81cd835661c19db51a895a0d3d2ef30ca000edd"} Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.268442 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"edf4ef39-b627-4ddd-8f6d-293745e2cfc3","Type":"ContainerDied","Data":"fb518cb7996b86d0cc2fb3d9b4e0f506739d8a47a5a0f3f85d829ffd77ccec63"} Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.268461 4949 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="fb518cb7996b86d0cc2fb3d9b4e0f506739d8a47a5a0f3f85d829ffd77ccec63" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.270140 4949 generic.go:334] "Generic (PLEG): container finished" podID="0b3553a0-42ab-4edb-9a35-4268e81df5ce" containerID="afb5c0e994ef3c39c3836b90b514109c09a7dc733f280b31f6e71e8671ad2d30" exitCode=0 Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.270198 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67ab-account-create-8wlvb" event={"ID":"0b3553a0-42ab-4edb-9a35-4268e81df5ce","Type":"ContainerDied","Data":"afb5c0e994ef3c39c3836b90b514109c09a7dc733f280b31f6e71e8671ad2d30"} Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.270220 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67ab-account-create-8wlvb" event={"ID":"0b3553a0-42ab-4edb-9a35-4268e81df5ce","Type":"ContainerStarted","Data":"96b8c3f2ea65c84289119d03f62f637a73baaaf09b6db1807ede860d0657e9ef"} Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.271465 4949 generic.go:334] "Generic (PLEG): container finished" podID="084e617b-6327-45db-8d6c-61f5d0f779c2" containerID="95aa3f67524ee4c2cad309bf3cef511fcecd1ff565ecb211657f310b04931304" exitCode=0 Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.271492 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8367-account-create-vptlt" event={"ID":"084e617b-6327-45db-8d6c-61f5d0f779c2","Type":"ContainerDied","Data":"95aa3f67524ee4c2cad309bf3cef511fcecd1ff565ecb211657f310b04931304"} Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.271515 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8367-account-create-vptlt" event={"ID":"084e617b-6327-45db-8d6c-61f5d0f779c2","Type":"ContainerStarted","Data":"18029f90e7b8a47bd21a7b933163a81fc91c93400f98f978bdb97cd4d9b685a7"} Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.329573 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.456119 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-ceilometer-tls-certs\") pod \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.456233 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-combined-ca-bundle\") pod \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.456274 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gd65w\" (UniqueName: \"kubernetes.io/projected/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-kube-api-access-gd65w\") pod \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.456366 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-log-httpd\") pod \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.456412 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-sg-core-conf-yaml\") pod \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.456437 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-run-httpd\") pod \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.456451 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-config-data\") pod \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.456513 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-scripts\") pod \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\" (UID: \"edf4ef39-b627-4ddd-8f6d-293745e2cfc3\") " Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.457005 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "edf4ef39-b627-4ddd-8f6d-293745e2cfc3" (UID: "edf4ef39-b627-4ddd-8f6d-293745e2cfc3"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.457019 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "edf4ef39-b627-4ddd-8f6d-293745e2cfc3" (UID: "edf4ef39-b627-4ddd-8f6d-293745e2cfc3"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.469063 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-kube-api-access-gd65w" (OuterVolumeSpecName: "kube-api-access-gd65w") pod "edf4ef39-b627-4ddd-8f6d-293745e2cfc3" (UID: "edf4ef39-b627-4ddd-8f6d-293745e2cfc3"). InnerVolumeSpecName "kube-api-access-gd65w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.472891 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-scripts" (OuterVolumeSpecName: "scripts") pod "edf4ef39-b627-4ddd-8f6d-293745e2cfc3" (UID: "edf4ef39-b627-4ddd-8f6d-293745e2cfc3"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.481489 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "edf4ef39-b627-4ddd-8f6d-293745e2cfc3" (UID: "edf4ef39-b627-4ddd-8f6d-293745e2cfc3"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.540174 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "edf4ef39-b627-4ddd-8f6d-293745e2cfc3" (UID: "edf4ef39-b627-4ddd-8f6d-293745e2cfc3"). InnerVolumeSpecName "ceilometer-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.558706 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.558884 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.558945 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.558998 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.559049 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gd65w\" (UniqueName: \"kubernetes.io/projected/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-kube-api-access-gd65w\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.559106 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.570333 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "edf4ef39-b627-4ddd-8f6d-293745e2cfc3" (UID: 
"edf4ef39-b627-4ddd-8f6d-293745e2cfc3"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.578496 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-config-data" (OuterVolumeSpecName: "config-data") pod "edf4ef39-b627-4ddd-8f6d-293745e2cfc3" (UID: "edf4ef39-b627-4ddd-8f6d-293745e2cfc3"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.661147 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:39 crc kubenswrapper[4949]: I1001 16:00:39.661189 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/edf4ef39-b627-4ddd-8f6d-293745e2cfc3-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.278372 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.303614 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.315599 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.329438 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:40 crc kubenswrapper[4949]: E1001 16:00:40.329816 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="ceilometer-notification-agent" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.329834 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="ceilometer-notification-agent" Oct 01 16:00:40 crc kubenswrapper[4949]: E1001 16:00:40.329853 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="proxy-httpd" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.329861 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="proxy-httpd" Oct 01 16:00:40 crc kubenswrapper[4949]: E1001 16:00:40.329892 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="ceilometer-central-agent" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.329898 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="ceilometer-central-agent" Oct 01 16:00:40 crc kubenswrapper[4949]: E1001 16:00:40.329908 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="sg-core" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.329915 4949 
state_mem.go:107] "Deleted CPUSet assignment" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="sg-core" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.330064 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="sg-core" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.330075 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="ceilometer-notification-agent" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.330089 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="proxy-httpd" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.330098 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" containerName="ceilometer-central-agent" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.332422 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.334996 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.335348 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.337673 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.343641 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.372332 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-scripts\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.372626 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.372705 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.372750 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-run-httpd\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.372802 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpn8l\" (UniqueName: \"kubernetes.io/projected/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-kube-api-access-rpn8l\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.372931 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-log-httpd\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.372967 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.373186 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-config-data\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.474830 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-scripts\") pod \"ceilometer-0\" (UID: 
\"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.474869 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.474919 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.474940 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-run-httpd\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.474964 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpn8l\" (UniqueName: \"kubernetes.io/projected/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-kube-api-access-rpn8l\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.474984 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-log-httpd\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.475000 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.475053 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-config-data\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.476218 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-run-httpd\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.476511 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-log-httpd\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.481912 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.484908 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc 
kubenswrapper[4949]: I1001 16:00:40.485077 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.485202 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-config-data\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.493971 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-scripts\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.497762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpn8l\" (UniqueName: \"kubernetes.io/projected/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-kube-api-access-rpn8l\") pod \"ceilometer-0\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.645493 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-8367-account-create-vptlt" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.649643 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.656865 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-67ab-account-create-8wlvb" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.790896 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj2nd\" (UniqueName: \"kubernetes.io/projected/084e617b-6327-45db-8d6c-61f5d0f779c2-kube-api-access-qj2nd\") pod \"084e617b-6327-45db-8d6c-61f5d0f779c2\" (UID: \"084e617b-6327-45db-8d6c-61f5d0f779c2\") " Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.791508 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lcq2z\" (UniqueName: \"kubernetes.io/projected/0b3553a0-42ab-4edb-9a35-4268e81df5ce-kube-api-access-lcq2z\") pod \"0b3553a0-42ab-4edb-9a35-4268e81df5ce\" (UID: \"0b3553a0-42ab-4edb-9a35-4268e81df5ce\") " Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.796674 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/084e617b-6327-45db-8d6c-61f5d0f779c2-kube-api-access-qj2nd" (OuterVolumeSpecName: "kube-api-access-qj2nd") pod "084e617b-6327-45db-8d6c-61f5d0f779c2" (UID: "084e617b-6327-45db-8d6c-61f5d0f779c2"). InnerVolumeSpecName "kube-api-access-qj2nd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.796728 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b3553a0-42ab-4edb-9a35-4268e81df5ce-kube-api-access-lcq2z" (OuterVolumeSpecName: "kube-api-access-lcq2z") pod "0b3553a0-42ab-4edb-9a35-4268e81df5ce" (UID: "0b3553a0-42ab-4edb-9a35-4268e81df5ce"). InnerVolumeSpecName "kube-api-access-lcq2z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.893664 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lcq2z\" (UniqueName: \"kubernetes.io/projected/0b3553a0-42ab-4edb-9a35-4268e81df5ce-kube-api-access-lcq2z\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:40 crc kubenswrapper[4949]: I1001 16:00:40.893696 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qj2nd\" (UniqueName: \"kubernetes.io/projected/084e617b-6327-45db-8d6c-61f5d0f779c2-kube-api-access-qj2nd\") on node \"crc\" DevicePath \"\"" Oct 01 16:00:41 crc kubenswrapper[4949]: I1001 16:00:41.076271 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:00:41 crc kubenswrapper[4949]: W1001 16:00:41.084912 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cbd7766_b0b0_4766_a89f_0d1f0bfb1fa7.slice/crio-d25ee1dba6d5d767a6925704a75154fcbf87aa238318c98d264c019a95c2c374 WatchSource:0}: Error finding container d25ee1dba6d5d767a6925704a75154fcbf87aa238318c98d264c019a95c2c374: Status 404 returned error can't find the container with id d25ee1dba6d5d767a6925704a75154fcbf87aa238318c98d264c019a95c2c374 Oct 01 16:00:41 crc kubenswrapper[4949]: I1001 16:00:41.287112 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-8367-account-create-vptlt" event={"ID":"084e617b-6327-45db-8d6c-61f5d0f779c2","Type":"ContainerDied","Data":"18029f90e7b8a47bd21a7b933163a81fc91c93400f98f978bdb97cd4d9b685a7"} Oct 01 16:00:41 crc kubenswrapper[4949]: I1001 16:00:41.287156 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-8367-account-create-vptlt" Oct 01 16:00:41 crc kubenswrapper[4949]: I1001 16:00:41.287173 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="18029f90e7b8a47bd21a7b933163a81fc91c93400f98f978bdb97cd4d9b685a7" Oct 01 16:00:41 crc kubenswrapper[4949]: I1001 16:00:41.288535 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerStarted","Data":"d25ee1dba6d5d767a6925704a75154fcbf87aa238318c98d264c019a95c2c374"} Oct 01 16:00:41 crc kubenswrapper[4949]: I1001 16:00:41.290034 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-67ab-account-create-8wlvb" event={"ID":"0b3553a0-42ab-4edb-9a35-4268e81df5ce","Type":"ContainerDied","Data":"96b8c3f2ea65c84289119d03f62f637a73baaaf09b6db1807ede860d0657e9ef"} Oct 01 16:00:41 crc kubenswrapper[4949]: I1001 16:00:41.290058 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="96b8c3f2ea65c84289119d03f62f637a73baaaf09b6db1807ede860d0657e9ef" Oct 01 16:00:41 crc kubenswrapper[4949]: I1001 16:00:41.290080 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-67ab-account-create-8wlvb" Oct 01 16:00:41 crc kubenswrapper[4949]: I1001 16:00:41.619433 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="edf4ef39-b627-4ddd-8f6d-293745e2cfc3" path="/var/lib/kubelet/pods/edf4ef39-b627-4ddd-8f6d-293745e2cfc3/volumes" Oct 01 16:00:42 crc kubenswrapper[4949]: I1001 16:00:42.301285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerStarted","Data":"7492efd1e575c76cde19a3982e727743ba9f888aa74bf111a896e63fb8c084ef"} Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.055300 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zpzx9"] Oct 01 16:00:43 crc kubenswrapper[4949]: E1001 16:00:43.055818 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b3553a0-42ab-4edb-9a35-4268e81df5ce" containerName="mariadb-account-create" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.055829 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b3553a0-42ab-4edb-9a35-4268e81df5ce" containerName="mariadb-account-create" Oct 01 16:00:43 crc kubenswrapper[4949]: E1001 16:00:43.055858 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="084e617b-6327-45db-8d6c-61f5d0f779c2" containerName="mariadb-account-create" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.055864 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="084e617b-6327-45db-8d6c-61f5d0f779c2" containerName="mariadb-account-create" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.056023 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="084e617b-6327-45db-8d6c-61f5d0f779c2" containerName="mariadb-account-create" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.056043 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b3553a0-42ab-4edb-9a35-4268e81df5ce" 
containerName="mariadb-account-create" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.056553 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.058442 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5wkfn" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.059478 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-scripts" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.059657 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.072479 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zpzx9"] Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.127886 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.127966 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-scripts\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.128013 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-config-data\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.128037 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqtv7\" (UniqueName: \"kubernetes.io/projected/cb75992b-3dfd-40ee-a759-9ef7b3372366-kube-api-access-nqtv7\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.230442 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.230555 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-scripts\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.230622 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-config-data\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.230653 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-nqtv7\" (UniqueName: \"kubernetes.io/projected/cb75992b-3dfd-40ee-a759-9ef7b3372366-kube-api-access-nqtv7\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.239885 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-combined-ca-bundle\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.240316 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-config-data\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.240721 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-scripts\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.249699 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqtv7\" (UniqueName: \"kubernetes.io/projected/cb75992b-3dfd-40ee-a759-9ef7b3372366-kube-api-access-nqtv7\") pod \"nova-cell0-conductor-db-sync-zpzx9\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.311843 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" 
event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerStarted","Data":"d621bf56498b2f79c4d1ae216e2a200a2a5a9e71a942bad32ab6104a94a3ab12"} Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.311884 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerStarted","Data":"9b80a3caf6be28c81a0be839223c4c5ed03a307538c8613024a7ff602573ec63"} Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.371640 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:00:43 crc kubenswrapper[4949]: W1001 16:00:43.794411 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcb75992b_3dfd_40ee_a759_9ef7b3372366.slice/crio-1d60450b16f1e0e4db576e7822f9fa9de26105250df2de3bd0b5d0c8eed1b884 WatchSource:0}: Error finding container 1d60450b16f1e0e4db576e7822f9fa9de26105250df2de3bd0b5d0c8eed1b884: Status 404 returned error can't find the container with id 1d60450b16f1e0e4db576e7822f9fa9de26105250df2de3bd0b5d0c8eed1b884 Oct 01 16:00:43 crc kubenswrapper[4949]: I1001 16:00:43.794933 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zpzx9"] Oct 01 16:00:44 crc kubenswrapper[4949]: I1001 16:00:44.323538 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" event={"ID":"cb75992b-3dfd-40ee-a759-9ef7b3372366","Type":"ContainerStarted","Data":"1d60450b16f1e0e4db576e7822f9fa9de26105250df2de3bd0b5d0c8eed1b884"} Oct 01 16:00:44 crc kubenswrapper[4949]: E1001 16:00:44.716035 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = initializing source docker://quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified: decoding bearer token (last URL 
\"https://quay.io/v2/auth?account=openshift-release-dev%2Bocm_access_1b89217552bc42d1be3fb06a1aed001a&scope=repository%3Apodified-antelope-centos9%2Fopenstack-nova-conductor%3Apull&service=quay.io\", body start \"\"): unexpected end of JSON input" image="quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified" Oct 01 16:00:44 crc kubenswrapper[4949]: E1001 16:00:44.716238 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:nova-cell0-conductor-db-sync,Image:quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified,Command:[/bin/bash],Args:[-c /usr/local/bin/kolla_start],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CELL_NAME,Value:cell0,ValueFrom:nil,},EnvVar{Name:KOLLA_BOOTSTRAP,Value:true,ValueFrom:nil,},EnvVar{Name:KOLLA_CONFIG_STRATEGY,Value:COPY_ALWAYS,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/openstack/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:scripts,ReadOnly:false,MountPath:/var/lib/openstack/bin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/var/lib/kolla/config_files/config.json,SubPath:nova-conductor-dbsync-config.json,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:combined-ca-bundle,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqtv7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&Secur
ityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42436,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod nova-cell0-conductor-db-sync-zpzx9_openstack(cb75992b-3dfd-40ee-a759-9ef7b3372366): ErrImagePull: initializing source docker://quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified: decoding bearer token (last URL \"https://quay.io/v2/auth?account=openshift-release-dev%2Bocm_access_1b89217552bc42d1be3fb06a1aed001a&scope=repository%3Apodified-antelope-centos9%2Fopenstack-nova-conductor%3Apull&service=quay.io\", body start \"\"): unexpected end of JSON input" logger="UnhandledError" Oct 01 16:00:44 crc kubenswrapper[4949]: E1001 16:00:44.717462 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ErrImagePull: \"initializing source docker://quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified: decoding bearer token (last URL \\\"https://quay.io/v2/auth?account=openshift-release-dev%2Bocm_access_1b89217552bc42d1be3fb06a1aed001a&scope=repository%3Apodified-antelope-centos9%2Fopenstack-nova-conductor%3Apull&service=quay.io\\\", body start \\\"\\\"): unexpected end of JSON input\"" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" podUID="cb75992b-3dfd-40ee-a759-9ef7b3372366" Oct 01 16:00:45 crc kubenswrapper[4949]: E1001 16:00:45.333099 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"nova-cell0-conductor-db-sync\" with ImagePullBackOff: \"Back-off pulling image 
\\\"quay.io/podified-antelope-centos9/openstack-nova-conductor:current-podified\\\"\"" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" podUID="cb75992b-3dfd-40ee-a759-9ef7b3372366" Oct 01 16:00:46 crc kubenswrapper[4949]: I1001 16:00:46.345051 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerStarted","Data":"b6e520a4ab3ec0344e5a328b0d487c299448002c3336f5e05d51ed955fc9f403"} Oct 01 16:00:46 crc kubenswrapper[4949]: I1001 16:00:46.345569 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 01 16:00:46 crc kubenswrapper[4949]: I1001 16:00:46.376311 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.693599525 podStartE2EDuration="6.376293725s" podCreationTimestamp="2025-10-01 16:00:40 +0000 UTC" firstStartedPulling="2025-10-01 16:00:41.087631416 +0000 UTC m=+1140.393237607" lastFinishedPulling="2025-10-01 16:00:45.770325616 +0000 UTC m=+1145.075931807" observedRunningTime="2025-10-01 16:00:46.370313327 +0000 UTC m=+1145.675919538" watchObservedRunningTime="2025-10-01 16:00:46.376293725 +0000 UTC m=+1145.681899916" Oct 01 16:00:48 crc kubenswrapper[4949]: I1001 16:00:48.038787 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:00:48 crc kubenswrapper[4949]: I1001 16:00:48.039160 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 01 16:00:48 crc kubenswrapper[4949]: I1001 16:00:48.039216 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 16:00:48 crc kubenswrapper[4949]: I1001 16:00:48.040008 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7dd8953e4b2c2c8892c46fcb9ba2ef1fa5099f63e2374f27e45351f899f750d3"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 16:00:48 crc kubenswrapper[4949]: I1001 16:00:48.040080 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://7dd8953e4b2c2c8892c46fcb9ba2ef1fa5099f63e2374f27e45351f899f750d3" gracePeriod=600 Oct 01 16:00:48 crc kubenswrapper[4949]: I1001 16:00:48.365148 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="7dd8953e4b2c2c8892c46fcb9ba2ef1fa5099f63e2374f27e45351f899f750d3" exitCode=0 Oct 01 16:00:48 crc kubenswrapper[4949]: I1001 16:00:48.365192 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"7dd8953e4b2c2c8892c46fcb9ba2ef1fa5099f63e2374f27e45351f899f750d3"} Oct 01 16:00:48 crc kubenswrapper[4949]: I1001 16:00:48.365467 4949 scope.go:117] "RemoveContainer" containerID="4d3a1844e3e942fdd712d174f5f7801debfe4a5d8b96ab72892f6da901567689" Oct 01 16:00:49 crc kubenswrapper[4949]: I1001 16:00:49.379929 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" 
event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"77e05f9b2cb8b18e2f508e8c0aeda04f4ef4751cece8ee8551cb51dbf8eabbd8"} Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.145709 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29322241-shjwz"] Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.147428 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.153391 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29322241-shjwz"] Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.272485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-config-data\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.272530 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-combined-ca-bundle\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.272584 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kzct9\" (UniqueName: \"kubernetes.io/projected/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-kube-api-access-kzct9\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.272632 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-fernet-keys\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.373804 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-config-data\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.374116 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-combined-ca-bundle\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.374208 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kzct9\" (UniqueName: \"kubernetes.io/projected/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-kube-api-access-kzct9\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.374275 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-fernet-keys\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.380711 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-combined-ca-bundle\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.380737 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-fernet-keys\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.382976 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-config-data\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.390631 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kzct9\" (UniqueName: \"kubernetes.io/projected/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-kube-api-access-kzct9\") pod \"keystone-cron-29322241-shjwz\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:00 crc kubenswrapper[4949]: I1001 16:01:00.466687 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:02 crc kubenswrapper[4949]: I1001 16:01:02.495220 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" event={"ID":"cb75992b-3dfd-40ee-a759-9ef7b3372366","Type":"ContainerStarted","Data":"bb3e955ccf77df549efbaada0405675e70cd6157ed98d8870fe794cbb6b9b714"} Oct 01 16:01:02 crc kubenswrapper[4949]: I1001 16:01:02.510019 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" podStartSLOduration=1.025180371 podStartE2EDuration="19.510002055s" podCreationTimestamp="2025-10-01 16:00:43 +0000 UTC" firstStartedPulling="2025-10-01 16:00:43.797491862 +0000 UTC m=+1143.103098053" lastFinishedPulling="2025-10-01 16:01:02.282313546 +0000 UTC m=+1161.587919737" observedRunningTime="2025-10-01 16:01:02.509599704 +0000 UTC m=+1161.815205905" watchObservedRunningTime="2025-10-01 16:01:02.510002055 +0000 UTC m=+1161.815608246" Oct 01 16:01:02 crc kubenswrapper[4949]: I1001 16:01:02.634031 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29322241-shjwz"] Oct 01 16:01:02 crc kubenswrapper[4949]: W1001 16:01:02.640733 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5972192f_ecd3_4cfa_8f79_c8a3874f9c65.slice/crio-3152037ef66c69d34a615e1304d58f3d4ec35c2a97ef0af31e743531a6601dcd WatchSource:0}: Error finding container 3152037ef66c69d34a615e1304d58f3d4ec35c2a97ef0af31e743531a6601dcd: Status 404 returned error can't find the container with id 3152037ef66c69d34a615e1304d58f3d4ec35c2a97ef0af31e743531a6601dcd Oct 01 16:01:03 crc kubenswrapper[4949]: I1001 16:01:03.506632 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29322241-shjwz" 
event={"ID":"5972192f-ecd3-4cfa-8f79-c8a3874f9c65","Type":"ContainerStarted","Data":"2bc21a8e28a6193de5274990c61d8a168c59cc4085082923ef0af931e779adea"} Oct 01 16:01:03 crc kubenswrapper[4949]: I1001 16:01:03.507031 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29322241-shjwz" event={"ID":"5972192f-ecd3-4cfa-8f79-c8a3874f9c65","Type":"ContainerStarted","Data":"3152037ef66c69d34a615e1304d58f3d4ec35c2a97ef0af31e743531a6601dcd"} Oct 01 16:01:03 crc kubenswrapper[4949]: I1001 16:01:03.536077 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29322241-shjwz" podStartSLOduration=3.536048847 podStartE2EDuration="3.536048847s" podCreationTimestamp="2025-10-01 16:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:01:03.521782539 +0000 UTC m=+1162.827388740" watchObservedRunningTime="2025-10-01 16:01:03.536048847 +0000 UTC m=+1162.841655068" Oct 01 16:01:05 crc kubenswrapper[4949]: I1001 16:01:05.531978 4949 generic.go:334] "Generic (PLEG): container finished" podID="5972192f-ecd3-4cfa-8f79-c8a3874f9c65" containerID="2bc21a8e28a6193de5274990c61d8a168c59cc4085082923ef0af931e779adea" exitCode=0 Oct 01 16:01:05 crc kubenswrapper[4949]: I1001 16:01:05.532648 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29322241-shjwz" event={"ID":"5972192f-ecd3-4cfa-8f79-c8a3874f9c65","Type":"ContainerDied","Data":"2bc21a8e28a6193de5274990c61d8a168c59cc4085082923ef0af931e779adea"} Oct 01 16:01:06 crc kubenswrapper[4949]: I1001 16:01:06.886016 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:06 crc kubenswrapper[4949]: I1001 16:01:06.993217 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-config-data\") pod \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " Oct 01 16:01:06 crc kubenswrapper[4949]: I1001 16:01:06.993296 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-combined-ca-bundle\") pod \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " Oct 01 16:01:06 crc kubenswrapper[4949]: I1001 16:01:06.993357 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-fernet-keys\") pod \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " Oct 01 16:01:06 crc kubenswrapper[4949]: I1001 16:01:06.993401 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kzct9\" (UniqueName: \"kubernetes.io/projected/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-kube-api-access-kzct9\") pod \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\" (UID: \"5972192f-ecd3-4cfa-8f79-c8a3874f9c65\") " Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.000793 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-kube-api-access-kzct9" (OuterVolumeSpecName: "kube-api-access-kzct9") pod "5972192f-ecd3-4cfa-8f79-c8a3874f9c65" (UID: "5972192f-ecd3-4cfa-8f79-c8a3874f9c65"). InnerVolumeSpecName "kube-api-access-kzct9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.000877 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "5972192f-ecd3-4cfa-8f79-c8a3874f9c65" (UID: "5972192f-ecd3-4cfa-8f79-c8a3874f9c65"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.021994 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "5972192f-ecd3-4cfa-8f79-c8a3874f9c65" (UID: "5972192f-ecd3-4cfa-8f79-c8a3874f9c65"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.054615 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-config-data" (OuterVolumeSpecName: "config-data") pod "5972192f-ecd3-4cfa-8f79-c8a3874f9c65" (UID: "5972192f-ecd3-4cfa-8f79-c8a3874f9c65"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.095393 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.095606 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.095695 4949 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.095748 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kzct9\" (UniqueName: \"kubernetes.io/projected/5972192f-ecd3-4cfa-8f79-c8a3874f9c65-kube-api-access-kzct9\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.558169 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29322241-shjwz" event={"ID":"5972192f-ecd3-4cfa-8f79-c8a3874f9c65","Type":"ContainerDied","Data":"3152037ef66c69d34a615e1304d58f3d4ec35c2a97ef0af31e743531a6601dcd"} Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.558591 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3152037ef66c69d34a615e1304d58f3d4ec35c2a97ef0af31e743531a6601dcd" Oct 01 16:01:07 crc kubenswrapper[4949]: I1001 16:01:07.558214 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29322241-shjwz" Oct 01 16:01:10 crc kubenswrapper[4949]: I1001 16:01:10.657603 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 01 16:01:18 crc kubenswrapper[4949]: I1001 16:01:18.658927 4949 generic.go:334] "Generic (PLEG): container finished" podID="cb75992b-3dfd-40ee-a759-9ef7b3372366" containerID="bb3e955ccf77df549efbaada0405675e70cd6157ed98d8870fe794cbb6b9b714" exitCode=0 Oct 01 16:01:18 crc kubenswrapper[4949]: I1001 16:01:18.658973 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" event={"ID":"cb75992b-3dfd-40ee-a759-9ef7b3372366","Type":"ContainerDied","Data":"bb3e955ccf77df549efbaada0405675e70cd6157ed98d8870fe794cbb6b9b714"} Oct 01 16:01:19 crc kubenswrapper[4949]: I1001 16:01:19.965474 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.021269 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-scripts\") pod \"cb75992b-3dfd-40ee-a759-9ef7b3372366\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.021344 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqtv7\" (UniqueName: \"kubernetes.io/projected/cb75992b-3dfd-40ee-a759-9ef7b3372366-kube-api-access-nqtv7\") pod \"cb75992b-3dfd-40ee-a759-9ef7b3372366\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.021367 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-config-data\") pod 
\"cb75992b-3dfd-40ee-a759-9ef7b3372366\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.021502 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-combined-ca-bundle\") pod \"cb75992b-3dfd-40ee-a759-9ef7b3372366\" (UID: \"cb75992b-3dfd-40ee-a759-9ef7b3372366\") " Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.026557 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-scripts" (OuterVolumeSpecName: "scripts") pod "cb75992b-3dfd-40ee-a759-9ef7b3372366" (UID: "cb75992b-3dfd-40ee-a759-9ef7b3372366"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.026836 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cb75992b-3dfd-40ee-a759-9ef7b3372366-kube-api-access-nqtv7" (OuterVolumeSpecName: "kube-api-access-nqtv7") pod "cb75992b-3dfd-40ee-a759-9ef7b3372366" (UID: "cb75992b-3dfd-40ee-a759-9ef7b3372366"). InnerVolumeSpecName "kube-api-access-nqtv7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.047554 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-config-data" (OuterVolumeSpecName: "config-data") pod "cb75992b-3dfd-40ee-a759-9ef7b3372366" (UID: "cb75992b-3dfd-40ee-a759-9ef7b3372366"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.059350 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "cb75992b-3dfd-40ee-a759-9ef7b3372366" (UID: "cb75992b-3dfd-40ee-a759-9ef7b3372366"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.123387 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.123416 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.123427 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nqtv7\" (UniqueName: \"kubernetes.io/projected/cb75992b-3dfd-40ee-a759-9ef7b3372366-kube-api-access-nqtv7\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.123438 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/cb75992b-3dfd-40ee-a759-9ef7b3372366-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.690263 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" event={"ID":"cb75992b-3dfd-40ee-a759-9ef7b3372366","Type":"ContainerDied","Data":"1d60450b16f1e0e4db576e7822f9fa9de26105250df2de3bd0b5d0c8eed1b884"} Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.690345 4949 pod_container_deletor.go:80] "Container not 
found in pod's containers" containerID="1d60450b16f1e0e4db576e7822f9fa9de26105250df2de3bd0b5d0c8eed1b884" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.690481 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-db-sync-zpzx9" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.789358 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 01 16:01:20 crc kubenswrapper[4949]: E1001 16:01:20.789953 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cb75992b-3dfd-40ee-a759-9ef7b3372366" containerName="nova-cell0-conductor-db-sync" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.789970 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cb75992b-3dfd-40ee-a759-9ef7b3372366" containerName="nova-cell0-conductor-db-sync" Oct 01 16:01:20 crc kubenswrapper[4949]: E1001 16:01:20.789992 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5972192f-ecd3-4cfa-8f79-c8a3874f9c65" containerName="keystone-cron" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.789999 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5972192f-ecd3-4cfa-8f79-c8a3874f9c65" containerName="keystone-cron" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.790173 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5972192f-ecd3-4cfa-8f79-c8a3874f9c65" containerName="keystone-cron" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.790192 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cb75992b-3dfd-40ee-a759-9ef7b3372366" containerName="nova-cell0-conductor-db-sync" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.790804 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.795621 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-nova-dockercfg-5wkfn" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.795783 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-conductor-config-data" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.809131 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.836117 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rb5lf\" (UniqueName: \"kubernetes.io/projected/f6a7962e-d000-4077-aeb6-fbc55876a90d-kube-api-access-rb5lf\") pod \"nova-cell0-conductor-0\" (UID: \"f6a7962e-d000-4077-aeb6-fbc55876a90d\") " pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.836245 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7962e-d000-4077-aeb6-fbc55876a90d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f6a7962e-d000-4077-aeb6-fbc55876a90d\") " pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.836276 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7962e-d000-4077-aeb6-fbc55876a90d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f6a7962e-d000-4077-aeb6-fbc55876a90d\") " pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.937645 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f6a7962e-d000-4077-aeb6-fbc55876a90d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f6a7962e-d000-4077-aeb6-fbc55876a90d\") " pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.937736 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7962e-d000-4077-aeb6-fbc55876a90d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f6a7962e-d000-4077-aeb6-fbc55876a90d\") " pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.937880 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rb5lf\" (UniqueName: \"kubernetes.io/projected/f6a7962e-d000-4077-aeb6-fbc55876a90d-kube-api-access-rb5lf\") pod \"nova-cell0-conductor-0\" (UID: \"f6a7962e-d000-4077-aeb6-fbc55876a90d\") " pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.945638 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f6a7962e-d000-4077-aeb6-fbc55876a90d-config-data\") pod \"nova-cell0-conductor-0\" (UID: \"f6a7962e-d000-4077-aeb6-fbc55876a90d\") " pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.948946 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f6a7962e-d000-4077-aeb6-fbc55876a90d-combined-ca-bundle\") pod \"nova-cell0-conductor-0\" (UID: \"f6a7962e-d000-4077-aeb6-fbc55876a90d\") " pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:20 crc kubenswrapper[4949]: I1001 16:01:20.971716 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rb5lf\" (UniqueName: \"kubernetes.io/projected/f6a7962e-d000-4077-aeb6-fbc55876a90d-kube-api-access-rb5lf\") pod \"nova-cell0-conductor-0\" 
(UID: \"f6a7962e-d000-4077-aeb6-fbc55876a90d\") " pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:21 crc kubenswrapper[4949]: I1001 16:01:21.116928 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:21 crc kubenswrapper[4949]: I1001 16:01:21.551050 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-conductor-0"] Oct 01 16:01:21 crc kubenswrapper[4949]: I1001 16:01:21.708540 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f6a7962e-d000-4077-aeb6-fbc55876a90d","Type":"ContainerStarted","Data":"1a75522fa3d997bc71f7cc27177908ba7f1067b31ada9c40fc8d0988a1708e91"} Oct 01 16:01:22 crc kubenswrapper[4949]: I1001 16:01:22.721904 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-conductor-0" event={"ID":"f6a7962e-d000-4077-aeb6-fbc55876a90d","Type":"ContainerStarted","Data":"8abc4c96b08f8211d415c31fbd0aed0751b93280f323e7aa4337f8ac8d7d908c"} Oct 01 16:01:22 crc kubenswrapper[4949]: I1001 16:01:22.722772 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:22 crc kubenswrapper[4949]: I1001 16:01:22.744432 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-conductor-0" podStartSLOduration=2.744408896 podStartE2EDuration="2.744408896s" podCreationTimestamp="2025-10-01 16:01:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:01:22.742699418 +0000 UTC m=+1182.048305659" watchObservedRunningTime="2025-10-01 16:01:22.744408896 +0000 UTC m=+1182.050015087" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.150476 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell0-conductor-0" Oct 01 16:01:26 crc kubenswrapper[4949]: 
I1001 16:01:26.569521 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell0-cell-mapping-829qg"] Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.570899 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.574062 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-config-data" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.577009 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell0-manage-scripts" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.580237 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-829qg"] Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.632740 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sd27\" (UniqueName: \"kubernetes.io/projected/0a0d0f6b-78d8-4295-8842-0b95d1081339-kube-api-access-7sd27\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.632819 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-scripts\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.632847 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") 
" pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.632910 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-config-data\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.732629 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.733790 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sd27\" (UniqueName: \"kubernetes.io/projected/0a0d0f6b-78d8-4295-8842-0b95d1081339-kube-api-access-7sd27\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.733876 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-scripts\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.733903 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.733938 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-config-data\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.733940 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.741381 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.744879 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-scripts\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.761474 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.765995 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-config-data\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.766341 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-combined-ca-bundle\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.767974 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-7sd27\" (UniqueName: \"kubernetes.io/projected/0a0d0f6b-78d8-4295-8842-0b95d1081339-kube-api-access-7sd27\") pod \"nova-cell0-cell-mapping-829qg\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.829317 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.830635 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.833531 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.835440 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.835501 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.835553 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brzng\" (UniqueName: \"kubernetes.io/projected/dcb99768-4ca7-4116-86f9-118cbcee56f4-kube-api-access-brzng\") pod \"nova-cell1-novncproxy-0\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 
16:01:26.835584 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.835609 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgtph\" (UniqueName: \"kubernetes.io/projected/3bca9c47-f575-421c-8560-0e9959ea3031-kube-api-access-zgtph\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.835645 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-config-data\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.835671 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bca9c47-f575-421c-8560-0e9959ea3031-logs\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.878662 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.896822 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.898027 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.899088 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.909242 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.936888 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.936967 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd5q4\" (UniqueName: \"kubernetes.io/projected/4222e1cd-2c24-403c-99cb-3bce6a0e8881-kube-api-access-vd5q4\") pod \"nova-scheduler-0\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.937000 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brzng\" (UniqueName: \"kubernetes.io/projected/dcb99768-4ca7-4116-86f9-118cbcee56f4-kube-api-access-brzng\") pod \"nova-cell1-novncproxy-0\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.937034 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 
16:01:26.937055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgtph\" (UniqueName: \"kubernetes.io/projected/3bca9c47-f575-421c-8560-0e9959ea3031-kube-api-access-zgtph\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.937090 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-config-data\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.937135 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bca9c47-f575-421c-8560-0e9959ea3031-logs\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.937283 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.937323 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.937351 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-config-data\") pod \"nova-scheduler-0\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.942338 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bca9c47-f575-421c-8560-0e9959ea3031-logs\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.956994 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.957721 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.965891 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-config-data\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.968535 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.969025 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgtph\" (UniqueName: 
\"kubernetes.io/projected/3bca9c47-f575-421c-8560-0e9959ea3031-kube-api-access-zgtph\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.974888 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brzng\" (UniqueName: \"kubernetes.io/projected/dcb99768-4ca7-4116-86f9-118cbcee56f4-kube-api-access-brzng\") pod \"nova-cell1-novncproxy-0\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:26 crc kubenswrapper[4949]: I1001 16:01:26.982617 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " pod="openstack/nova-metadata-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.036986 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.038442 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.055737 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.055954 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-config-data\") pod \"nova-scheduler-0\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.056014 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.064752 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vd5q4\" (UniqueName: \"kubernetes.io/projected/4222e1cd-2c24-403c-99cb-3bce6a0e8881-kube-api-access-vd5q4\") pod \"nova-scheduler-0\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.083974 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-v47vk"] Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.125084 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-config-data\") pod \"nova-scheduler-0\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.128979 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.130661 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.133898 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd5q4\" (UniqueName: \"kubernetes.io/projected/4222e1cd-2c24-403c-99cb-3bce6a0e8881-kube-api-access-vd5q4\") pod \"nova-scheduler-0\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.134213 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-v47vk"] Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.146900 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.156561 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.166076 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.195638 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-dns-svc\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.195721 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-logs\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.195755 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2snp6\" (UniqueName: \"kubernetes.io/projected/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-kube-api-access-2snp6\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.195980 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.196020 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " 
pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.196060 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.196096 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vfxgj\" (UniqueName: \"kubernetes.io/projected/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-kube-api-access-vfxgj\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.196291 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-config-data\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.196381 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-config\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.298386 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vfxgj\" (UniqueName: \"kubernetes.io/projected/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-kube-api-access-vfxgj\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.298470 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-config-data\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.298513 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-config\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.298550 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-dns-svc\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.298605 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-logs\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.298633 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2snp6\" (UniqueName: \"kubernetes.io/projected/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-kube-api-access-2snp6\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.298693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.298723 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.298756 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.302983 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-config\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.303248 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-dns-svc\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.303261 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-nb\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: 
\"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.303593 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-logs\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.304322 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.305681 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-sb\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.316152 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-config-data\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.319680 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2snp6\" (UniqueName: \"kubernetes.io/projected/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-kube-api-access-2snp6\") pod \"dnsmasq-dns-566b5b7845-v47vk\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.320050 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-vfxgj\" (UniqueName: \"kubernetes.io/projected/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-kube-api-access-vfxgj\") pod \"nova-api-0\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.335524 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.436885 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.527952 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.529718 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell0-cell-mapping-829qg"] Oct 01 16:01:27 crc kubenswrapper[4949]: W1001 16:01:27.575149 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0a0d0f6b_78d8_4295_8842_0b95d1081339.slice/crio-367826da8ebd37e6b83c4b433f7d79ebb25859bf68a4dedc7cdd5eed8467cdac WatchSource:0}: Error finding container 367826da8ebd37e6b83c4b433f7d79ebb25859bf68a4dedc7cdd5eed8467cdac: Status 404 returned error can't find the container with id 367826da8ebd37e6b83c4b433f7d79ebb25859bf68a4dedc7cdd5eed8467cdac Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.581964 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.692032 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:01:27 crc kubenswrapper[4949]: W1001 16:01:27.710429 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4222e1cd_2c24_403c_99cb_3bce6a0e8881.slice/crio-62690b5a64b1473064babf5898c450d149689717e716005f01c51a5c7970b165 WatchSource:0}: Error finding container 62690b5a64b1473064babf5898c450d149689717e716005f01c51a5c7970b165: Status 404 returned error can't find the container with id 62690b5a64b1473064babf5898c450d149689717e716005f01c51a5c7970b165 Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.749820 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.776328 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4222e1cd-2c24-403c-99cb-3bce6a0e8881","Type":"ContainerStarted","Data":"62690b5a64b1473064babf5898c450d149689717e716005f01c51a5c7970b165"} Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.778030 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bca9c47-f575-421c-8560-0e9959ea3031","Type":"ContainerStarted","Data":"7d2060b8a2765b310beb7bde9d430c0e7e8ae6864c345f21ea4b2fc289218fe6"} Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.779298 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dcb99768-4ca7-4116-86f9-118cbcee56f4","Type":"ContainerStarted","Data":"e7e1b515bab802d6d44d012b4e30311a4758b03cc052c5a0f1a356fc134ef8c0"} Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.780297 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-829qg" event={"ID":"0a0d0f6b-78d8-4295-8842-0b95d1081339","Type":"ContainerStarted","Data":"367826da8ebd37e6b83c4b433f7d79ebb25859bf68a4dedc7cdd5eed8467cdac"} Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.817710 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qt9wv"] Oct 01 16:01:27 crc 
kubenswrapper[4949]: I1001 16:01:27.819064 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.826043 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-scripts" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.828080 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.842498 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qt9wv"] Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.916316 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6r27w\" (UniqueName: \"kubernetes.io/projected/a6a3d350-3e49-470b-80e2-0fe197b477e8-kube-api-access-6r27w\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.916601 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-config-data\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.916639 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-scripts\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 
16:01:27.916663 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:27 crc kubenswrapper[4949]: I1001 16:01:27.999612 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:01:28 crc kubenswrapper[4949]: W1001 16:01:28.003094 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b84835e_7891_4f3f_b33f_c1af53dfb7ae.slice/crio-a255eef98a3219b6e5ad1fb0c50a43419b1efb3c85acf536255cf7f1888c1fbb WatchSource:0}: Error finding container a255eef98a3219b6e5ad1fb0c50a43419b1efb3c85acf536255cf7f1888c1fbb: Status 404 returned error can't find the container with id a255eef98a3219b6e5ad1fb0c50a43419b1efb3c85acf536255cf7f1888c1fbb Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.018094 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.018277 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6r27w\" (UniqueName: \"kubernetes.io/projected/a6a3d350-3e49-470b-80e2-0fe197b477e8-kube-api-access-6r27w\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.018301 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-config-data\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.018336 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-scripts\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.041408 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-config-data\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.045031 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-combined-ca-bundle\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.054457 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6r27w\" (UniqueName: \"kubernetes.io/projected/a6a3d350-3e49-470b-80e2-0fe197b477e8-kube-api-access-6r27w\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.054808 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-scripts\") pod \"nova-cell1-conductor-db-sync-qt9wv\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.178638 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.186828 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-v47vk"] Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.708783 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qt9wv"] Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.793884 4949 generic.go:334] "Generic (PLEG): container finished" podID="4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" containerID="68c9a01d9d3ee3c696025febb9d560b621b39d962eb04d4034bafee6e73cd0f5" exitCode=0 Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.793963 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" event={"ID":"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385","Type":"ContainerDied","Data":"68c9a01d9d3ee3c696025febb9d560b621b39d962eb04d4034bafee6e73cd0f5"} Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.793993 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" event={"ID":"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385","Type":"ContainerStarted","Data":"8b5b9e5f714e6ed7ee3de556cb7fdfbbc0701dcdf4d483ce88c9b4e28709f4fe"} Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.798553 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-829qg" event={"ID":"0a0d0f6b-78d8-4295-8842-0b95d1081339","Type":"ContainerStarted","Data":"1ec3230cb7f5e4c4bce0c2fffc9fc916ee7ba4ddce52fc33da18ad1adddcd8df"} Oct 01 16:01:28 crc 
kubenswrapper[4949]: I1001 16:01:28.810679 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qt9wv" event={"ID":"a6a3d350-3e49-470b-80e2-0fe197b477e8","Type":"ContainerStarted","Data":"c11b3c8580d8dc50702cfef94da8378e35301891ac3baff83ccce0eff8d39c01"} Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.812416 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b84835e-7891-4f3f-b33f-c1af53dfb7ae","Type":"ContainerStarted","Data":"a255eef98a3219b6e5ad1fb0c50a43419b1efb3c85acf536255cf7f1888c1fbb"} Oct 01 16:01:28 crc kubenswrapper[4949]: I1001 16:01:28.837803 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell0-cell-mapping-829qg" podStartSLOduration=2.837789514 podStartE2EDuration="2.837789514s" podCreationTimestamp="2025-10-01 16:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:01:28.831941281 +0000 UTC m=+1188.137547472" watchObservedRunningTime="2025-10-01 16:01:28.837789514 +0000 UTC m=+1188.143395705" Oct 01 16:01:29 crc kubenswrapper[4949]: I1001 16:01:29.826080 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qt9wv" event={"ID":"a6a3d350-3e49-470b-80e2-0fe197b477e8","Type":"ContainerStarted","Data":"e36a5d7778e966b1cc2376c784f247789fdb36fe0d18a75d659b29bd9b9ee4a7"} Oct 01 16:01:29 crc kubenswrapper[4949]: I1001 16:01:29.832563 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" event={"ID":"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385","Type":"ContainerStarted","Data":"c10c04897d2fcea21ba258dbd462d7f62a5343064bda9b186b8c5869e7bb2c92"} Oct 01 16:01:29 crc kubenswrapper[4949]: I1001 16:01:29.832865 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:29 
crc kubenswrapper[4949]: I1001 16:01:29.844403 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-db-sync-qt9wv" podStartSLOduration=2.844369223 podStartE2EDuration="2.844369223s" podCreationTimestamp="2025-10-01 16:01:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:01:29.843368756 +0000 UTC m=+1189.148974947" watchObservedRunningTime="2025-10-01 16:01:29.844369223 +0000 UTC m=+1189.149975414" Oct 01 16:01:29 crc kubenswrapper[4949]: I1001 16:01:29.875239 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" podStartSLOduration=3.875221923 podStartE2EDuration="3.875221923s" podCreationTimestamp="2025-10-01 16:01:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:01:29.867851629 +0000 UTC m=+1189.173457840" watchObservedRunningTime="2025-10-01 16:01:29.875221923 +0000 UTC m=+1189.180828114" Oct 01 16:01:30 crc kubenswrapper[4949]: I1001 16:01:30.986289 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.002780 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.862058 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4222e1cd-2c24-403c-99cb-3bce6a0e8881","Type":"ContainerStarted","Data":"e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568"} Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.866663 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3bca9c47-f575-421c-8560-0e9959ea3031","Type":"ContainerStarted","Data":"a67fdc8fb1e78cbec010fc0ab38a3ff652430fa3a12eb648f2cda39f56f88f06"} Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.866714 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bca9c47-f575-421c-8560-0e9959ea3031","Type":"ContainerStarted","Data":"f70149aab3e5b6d70b594a1f755f3f6ae7e93d7b17c5b7ec957f4817edd81835"} Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.866822 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3bca9c47-f575-421c-8560-0e9959ea3031" containerName="nova-metadata-log" containerID="cri-o://f70149aab3e5b6d70b594a1f755f3f6ae7e93d7b17c5b7ec957f4817edd81835" gracePeriod=30 Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.866940 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="3bca9c47-f575-421c-8560-0e9959ea3031" containerName="nova-metadata-metadata" containerID="cri-o://a67fdc8fb1e78cbec010fc0ab38a3ff652430fa3a12eb648f2cda39f56f88f06" gracePeriod=30 Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.869932 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b84835e-7891-4f3f-b33f-c1af53dfb7ae","Type":"ContainerStarted","Data":"1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824"} Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.869969 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b84835e-7891-4f3f-b33f-c1af53dfb7ae","Type":"ContainerStarted","Data":"7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6"} Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.872335 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" 
event={"ID":"dcb99768-4ca7-4116-86f9-118cbcee56f4","Type":"ContainerStarted","Data":"752b9bf1705fca9fe2325f6cfbe7ce3a7bab632879d1cf95ebc09de622838b7a"} Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.872445 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-cell1-novncproxy-0" podUID="dcb99768-4ca7-4116-86f9-118cbcee56f4" containerName="nova-cell1-novncproxy-novncproxy" containerID="cri-o://752b9bf1705fca9fe2325f6cfbe7ce3a7bab632879d1cf95ebc09de622838b7a" gracePeriod=30 Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.882449 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.4791160420000002 podStartE2EDuration="5.882430016s" podCreationTimestamp="2025-10-01 16:01:26 +0000 UTC" firstStartedPulling="2025-10-01 16:01:27.724411466 +0000 UTC m=+1187.030017657" lastFinishedPulling="2025-10-01 16:01:31.12772544 +0000 UTC m=+1190.433331631" observedRunningTime="2025-10-01 16:01:31.878524137 +0000 UTC m=+1191.184130348" watchObservedRunningTime="2025-10-01 16:01:31.882430016 +0000 UTC m=+1191.188036197" Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.946326 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.566622094 podStartE2EDuration="5.946297128s" podCreationTimestamp="2025-10-01 16:01:26 +0000 UTC" firstStartedPulling="2025-10-01 16:01:27.748052246 +0000 UTC m=+1187.053658437" lastFinishedPulling="2025-10-01 16:01:31.12772728 +0000 UTC m=+1190.433333471" observedRunningTime="2025-10-01 16:01:31.909505361 +0000 UTC m=+1191.215111542" watchObservedRunningTime="2025-10-01 16:01:31.946297128 +0000 UTC m=+1191.251903329" Oct 01 16:01:31 crc kubenswrapper[4949]: I1001 16:01:31.963205 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.4499773400000002 
podStartE2EDuration="5.963188239s" podCreationTimestamp="2025-10-01 16:01:26 +0000 UTC" firstStartedPulling="2025-10-01 16:01:27.617967558 +0000 UTC m=+1186.923573749" lastFinishedPulling="2025-10-01 16:01:31.131178457 +0000 UTC m=+1190.436784648" observedRunningTime="2025-10-01 16:01:31.935520967 +0000 UTC m=+1191.241127188" watchObservedRunningTime="2025-10-01 16:01:31.963188239 +0000 UTC m=+1191.268794430" Oct 01 16:01:32 crc kubenswrapper[4949]: I1001 16:01:32.156916 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:01:32 crc kubenswrapper[4949]: I1001 16:01:32.170206 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 01 16:01:32 crc kubenswrapper[4949]: I1001 16:01:32.170252 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 01 16:01:32 crc kubenswrapper[4949]: I1001 16:01:32.336640 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 01 16:01:32 crc kubenswrapper[4949]: I1001 16:01:32.885998 4949 generic.go:334] "Generic (PLEG): container finished" podID="3bca9c47-f575-421c-8560-0e9959ea3031" containerID="f70149aab3e5b6d70b594a1f755f3f6ae7e93d7b17c5b7ec957f4817edd81835" exitCode=143 Oct 01 16:01:32 crc kubenswrapper[4949]: I1001 16:01:32.887352 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bca9c47-f575-421c-8560-0e9959ea3031","Type":"ContainerDied","Data":"f70149aab3e5b6d70b594a1f755f3f6ae7e93d7b17c5b7ec957f4817edd81835"} Oct 01 16:01:35 crc kubenswrapper[4949]: I1001 16:01:35.917399 4949 generic.go:334] "Generic (PLEG): container finished" podID="0a0d0f6b-78d8-4295-8842-0b95d1081339" containerID="1ec3230cb7f5e4c4bce0c2fffc9fc916ee7ba4ddce52fc33da18ad1adddcd8df" exitCode=0 Oct 01 16:01:35 crc kubenswrapper[4949]: I1001 16:01:35.917520 4949 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openstack/nova-cell0-cell-mapping-829qg" event={"ID":"0a0d0f6b-78d8-4295-8842-0b95d1081339","Type":"ContainerDied","Data":"1ec3230cb7f5e4c4bce0c2fffc9fc916ee7ba4ddce52fc33da18ad1adddcd8df"} Oct 01 16:01:35 crc kubenswrapper[4949]: I1001 16:01:35.933561 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=6.810809416 podStartE2EDuration="9.933541775s" podCreationTimestamp="2025-10-01 16:01:26 +0000 UTC" firstStartedPulling="2025-10-01 16:01:28.005333681 +0000 UTC m=+1187.310939882" lastFinishedPulling="2025-10-01 16:01:31.12806605 +0000 UTC m=+1190.433672241" observedRunningTime="2025-10-01 16:01:31.960341169 +0000 UTC m=+1191.265947360" watchObservedRunningTime="2025-10-01 16:01:35.933541775 +0000 UTC m=+1195.239147966" Oct 01 16:01:36 crc kubenswrapper[4949]: I1001 16:01:36.927772 4949 generic.go:334] "Generic (PLEG): container finished" podID="a6a3d350-3e49-470b-80e2-0fe197b477e8" containerID="e36a5d7778e966b1cc2376c784f247789fdb36fe0d18a75d659b29bd9b9ee4a7" exitCode=0 Oct 01 16:01:36 crc kubenswrapper[4949]: I1001 16:01:36.927856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qt9wv" event={"ID":"a6a3d350-3e49-470b-80e2-0fe197b477e8","Type":"ContainerDied","Data":"e36a5d7778e966b1cc2376c784f247789fdb36fe0d18a75d659b29bd9b9ee4a7"} Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.275187 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.291239 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7sd27\" (UniqueName: \"kubernetes.io/projected/0a0d0f6b-78d8-4295-8842-0b95d1081339-kube-api-access-7sd27\") pod \"0a0d0f6b-78d8-4295-8842-0b95d1081339\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.291347 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-combined-ca-bundle\") pod \"0a0d0f6b-78d8-4295-8842-0b95d1081339\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.291378 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-scripts\") pod \"0a0d0f6b-78d8-4295-8842-0b95d1081339\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.291473 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-config-data\") pod \"0a0d0f6b-78d8-4295-8842-0b95d1081339\" (UID: \"0a0d0f6b-78d8-4295-8842-0b95d1081339\") " Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.301617 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-scripts" (OuterVolumeSpecName: "scripts") pod "0a0d0f6b-78d8-4295-8842-0b95d1081339" (UID: "0a0d0f6b-78d8-4295-8842-0b95d1081339"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.303632 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0a0d0f6b-78d8-4295-8842-0b95d1081339-kube-api-access-7sd27" (OuterVolumeSpecName: "kube-api-access-7sd27") pod "0a0d0f6b-78d8-4295-8842-0b95d1081339" (UID: "0a0d0f6b-78d8-4295-8842-0b95d1081339"). InnerVolumeSpecName "kube-api-access-7sd27". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.319463 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "0a0d0f6b-78d8-4295-8842-0b95d1081339" (UID: "0a0d0f6b-78d8-4295-8842-0b95d1081339"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.325035 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-config-data" (OuterVolumeSpecName: "config-data") pod "0a0d0f6b-78d8-4295-8842-0b95d1081339" (UID: "0a0d0f6b-78d8-4295-8842-0b95d1081339"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.336640 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.369085 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.394283 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7sd27\" (UniqueName: \"kubernetes.io/projected/0a0d0f6b-78d8-4295-8842-0b95d1081339-kube-api-access-7sd27\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.394327 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.394336 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.394345 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0a0d0f6b-78d8-4295-8842-0b95d1081339-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.438043 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.438151 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.531289 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:01:37 crc 
kubenswrapper[4949]: I1001 16:01:37.594273 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-dtzdc"] Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.594508 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" podUID="93330a80-b373-43e8-88f3-26a188281912" containerName="dnsmasq-dns" containerID="cri-o://691a1d62aebef81b6c2bcbd9a34d47bb7ce5fe7c350f34f7dd607386a65d27b2" gracePeriod=10 Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.939011 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell0-cell-mapping-829qg" event={"ID":"0a0d0f6b-78d8-4295-8842-0b95d1081339","Type":"ContainerDied","Data":"367826da8ebd37e6b83c4b433f7d79ebb25859bf68a4dedc7cdd5eed8467cdac"} Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.939055 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="367826da8ebd37e6b83c4b433f7d79ebb25859bf68a4dedc7cdd5eed8467cdac" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.939162 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell0-cell-mapping-829qg" Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.946502 4949 generic.go:334] "Generic (PLEG): container finished" podID="93330a80-b373-43e8-88f3-26a188281912" containerID="691a1d62aebef81b6c2bcbd9a34d47bb7ce5fe7c350f34f7dd607386a65d27b2" exitCode=0 Oct 01 16:01:37 crc kubenswrapper[4949]: I1001 16:01:37.946744 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" event={"ID":"93330a80-b373-43e8-88f3-26a188281912","Type":"ContainerDied","Data":"691a1d62aebef81b6c2bcbd9a34d47bb7ce5fe7c350f34f7dd607386a65d27b2"} Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.008323 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.074495 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.075729 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.086950 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.087164 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-log" containerID="cri-o://7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6" gracePeriod=30 Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.087298 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-api" containerID="cri-o://1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824" gracePeriod=30 Oct 01 16:01:38 
crc kubenswrapper[4949]: I1001 16:01:38.114688 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": EOF" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.115190 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.175:8774/\": EOF" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.116539 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-nb\") pod \"93330a80-b373-43e8-88f3-26a188281912\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.116587 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4vzgj\" (UniqueName: \"kubernetes.io/projected/93330a80-b373-43e8-88f3-26a188281912-kube-api-access-4vzgj\") pod \"93330a80-b373-43e8-88f3-26a188281912\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.116686 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-sb\") pod \"93330a80-b373-43e8-88f3-26a188281912\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.116725 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-dns-svc\") pod \"93330a80-b373-43e8-88f3-26a188281912\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " Oct 01 16:01:38 crc 
kubenswrapper[4949]: I1001 16:01:38.116756 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-config\") pod \"93330a80-b373-43e8-88f3-26a188281912\" (UID: \"93330a80-b373-43e8-88f3-26a188281912\") " Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.129437 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/93330a80-b373-43e8-88f3-26a188281912-kube-api-access-4vzgj" (OuterVolumeSpecName: "kube-api-access-4vzgj") pod "93330a80-b373-43e8-88f3-26a188281912" (UID: "93330a80-b373-43e8-88f3-26a188281912"). InnerVolumeSpecName "kube-api-access-4vzgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.178767 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "93330a80-b373-43e8-88f3-26a188281912" (UID: "93330a80-b373-43e8-88f3-26a188281912"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.196456 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "93330a80-b373-43e8-88f3-26a188281912" (UID: "93330a80-b373-43e8-88f3-26a188281912"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.219037 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.219240 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4vzgj\" (UniqueName: \"kubernetes.io/projected/93330a80-b373-43e8-88f3-26a188281912-kube-api-access-4vzgj\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.219348 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.220868 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-config" (OuterVolumeSpecName: "config") pod "93330a80-b373-43e8-88f3-26a188281912" (UID: "93330a80-b373-43e8-88f3-26a188281912"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.253500 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "93330a80-b373-43e8-88f3-26a188281912" (UID: "93330a80-b373-43e8-88f3-26a188281912"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.320754 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.320793 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/93330a80-b373-43e8-88f3-26a188281912-config\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.343068 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.522971 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-scripts\") pod \"a6a3d350-3e49-470b-80e2-0fe197b477e8\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.523107 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-config-data\") pod \"a6a3d350-3e49-470b-80e2-0fe197b477e8\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.523166 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6r27w\" (UniqueName: \"kubernetes.io/projected/a6a3d350-3e49-470b-80e2-0fe197b477e8-kube-api-access-6r27w\") pod \"a6a3d350-3e49-470b-80e2-0fe197b477e8\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.523282 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-combined-ca-bundle\") pod \"a6a3d350-3e49-470b-80e2-0fe197b477e8\" (UID: \"a6a3d350-3e49-470b-80e2-0fe197b477e8\") " Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.527274 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6a3d350-3e49-470b-80e2-0fe197b477e8-kube-api-access-6r27w" (OuterVolumeSpecName: "kube-api-access-6r27w") pod "a6a3d350-3e49-470b-80e2-0fe197b477e8" (UID: "a6a3d350-3e49-470b-80e2-0fe197b477e8"). InnerVolumeSpecName "kube-api-access-6r27w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.529339 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-scripts" (OuterVolumeSpecName: "scripts") pod "a6a3d350-3e49-470b-80e2-0fe197b477e8" (UID: "a6a3d350-3e49-470b-80e2-0fe197b477e8"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.552489 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "a6a3d350-3e49-470b-80e2-0fe197b477e8" (UID: "a6a3d350-3e49-470b-80e2-0fe197b477e8"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.555594 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-config-data" (OuterVolumeSpecName: "config-data") pod "a6a3d350-3e49-470b-80e2-0fe197b477e8" (UID: "a6a3d350-3e49-470b-80e2-0fe197b477e8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.625359 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.625405 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.625417 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6r27w\" (UniqueName: \"kubernetes.io/projected/a6a3d350-3e49-470b-80e2-0fe197b477e8-kube-api-access-6r27w\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.625429 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6a3d350-3e49-470b-80e2-0fe197b477e8-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.956260 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-db-sync-qt9wv" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.956254 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-db-sync-qt9wv" event={"ID":"a6a3d350-3e49-470b-80e2-0fe197b477e8","Type":"ContainerDied","Data":"c11b3c8580d8dc50702cfef94da8378e35301891ac3baff83ccce0eff8d39c01"} Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.956668 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c11b3c8580d8dc50702cfef94da8378e35301891ac3baff83ccce0eff8d39c01" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.958280 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerID="7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6" exitCode=143 Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.958366 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b84835e-7891-4f3f-b33f-c1af53dfb7ae","Type":"ContainerDied","Data":"7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6"} Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.961322 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.966959 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6d97fcdd8f-dtzdc" event={"ID":"93330a80-b373-43e8-88f3-26a188281912","Type":"ContainerDied","Data":"c03d056cab5d0c74b72b9a9ad56a1ea5b6d0bc0f5bb87a26e3ff079dee3c25c4"} Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.967020 4949 scope.go:117] "RemoveContainer" containerID="691a1d62aebef81b6c2bcbd9a34d47bb7ce5fe7c350f34f7dd607386a65d27b2" Oct 01 16:01:38 crc kubenswrapper[4949]: I1001 16:01:38.993308 4949 scope.go:117] "RemoveContainer" containerID="18571e36d94e945585e8160c7d69ade60b528958bbfaa474d40219750698ebc8" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.022239 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-dtzdc"] Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.031830 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6d97fcdd8f-dtzdc"] Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.060209 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 01 16:01:39 crc kubenswrapper[4949]: E1001 16:01:39.061004 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0a0d0f6b-78d8-4295-8842-0b95d1081339" containerName="nova-manage" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.061022 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="0a0d0f6b-78d8-4295-8842-0b95d1081339" containerName="nova-manage" Oct 01 16:01:39 crc kubenswrapper[4949]: E1001 16:01:39.061065 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6a3d350-3e49-470b-80e2-0fe197b477e8" containerName="nova-cell1-conductor-db-sync" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.061073 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6a3d350-3e49-470b-80e2-0fe197b477e8" 
containerName="nova-cell1-conductor-db-sync" Oct 01 16:01:39 crc kubenswrapper[4949]: E1001 16:01:39.061104 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93330a80-b373-43e8-88f3-26a188281912" containerName="dnsmasq-dns" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.061111 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="93330a80-b373-43e8-88f3-26a188281912" containerName="dnsmasq-dns" Oct 01 16:01:39 crc kubenswrapper[4949]: E1001 16:01:39.061139 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="93330a80-b373-43e8-88f3-26a188281912" containerName="init" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.061146 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="93330a80-b373-43e8-88f3-26a188281912" containerName="init" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.061550 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="0a0d0f6b-78d8-4295-8842-0b95d1081339" containerName="nova-manage" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.061585 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6a3d350-3e49-470b-80e2-0fe197b477e8" containerName="nova-cell1-conductor-db-sync" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.061630 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="93330a80-b373-43e8-88f3-26a188281912" containerName="dnsmasq-dns" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.065379 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.069039 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.074986 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-conductor-config-data" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.235584 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r7kgj\" (UniqueName: \"kubernetes.io/projected/dffab7bc-2015-4dbe-9c81-b3e61eeface6-kube-api-access-r7kgj\") pod \"nova-cell1-conductor-0\" (UID: \"dffab7bc-2015-4dbe-9c81-b3e61eeface6\") " pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.235651 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffab7bc-2015-4dbe-9c81-b3e61eeface6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dffab7bc-2015-4dbe-9c81-b3e61eeface6\") " pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.235670 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffab7bc-2015-4dbe-9c81-b3e61eeface6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dffab7bc-2015-4dbe-9c81-b3e61eeface6\") " pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.337595 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r7kgj\" (UniqueName: \"kubernetes.io/projected/dffab7bc-2015-4dbe-9c81-b3e61eeface6-kube-api-access-r7kgj\") pod \"nova-cell1-conductor-0\" (UID: \"dffab7bc-2015-4dbe-9c81-b3e61eeface6\") " pod="openstack/nova-cell1-conductor-0" Oct 01 
16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.337686 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffab7bc-2015-4dbe-9c81-b3e61eeface6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dffab7bc-2015-4dbe-9c81-b3e61eeface6\") " pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.337721 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffab7bc-2015-4dbe-9c81-b3e61eeface6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dffab7bc-2015-4dbe-9c81-b3e61eeface6\") " pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.342735 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dffab7bc-2015-4dbe-9c81-b3e61eeface6-config-data\") pod \"nova-cell1-conductor-0\" (UID: \"dffab7bc-2015-4dbe-9c81-b3e61eeface6\") " pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.342741 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dffab7bc-2015-4dbe-9c81-b3e61eeface6-combined-ca-bundle\") pod \"nova-cell1-conductor-0\" (UID: \"dffab7bc-2015-4dbe-9c81-b3e61eeface6\") " pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.357679 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r7kgj\" (UniqueName: \"kubernetes.io/projected/dffab7bc-2015-4dbe-9c81-b3e61eeface6-kube-api-access-r7kgj\") pod \"nova-cell1-conductor-0\" (UID: \"dffab7bc-2015-4dbe-9c81-b3e61eeface6\") " pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.404623 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.633684 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="93330a80-b373-43e8-88f3-26a188281912" path="/var/lib/kubelet/pods/93330a80-b373-43e8-88f3-26a188281912/volumes" Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.883884 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-conductor-0"] Oct 01 16:01:39 crc kubenswrapper[4949]: W1001 16:01:39.885298 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poddffab7bc_2015_4dbe_9c81_b3e61eeface6.slice/crio-8bd7ba3c57192f7cbb2b06f089d2f7441e02a1ce0731c5feabc2f9ef2f561fb0 WatchSource:0}: Error finding container 8bd7ba3c57192f7cbb2b06f089d2f7441e02a1ce0731c5feabc2f9ef2f561fb0: Status 404 returned error can't find the container with id 8bd7ba3c57192f7cbb2b06f089d2f7441e02a1ce0731c5feabc2f9ef2f561fb0 Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.974141 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dffab7bc-2015-4dbe-9c81-b3e61eeface6","Type":"ContainerStarted","Data":"8bd7ba3c57192f7cbb2b06f089d2f7441e02a1ce0731c5feabc2f9ef2f561fb0"} Oct 01 16:01:39 crc kubenswrapper[4949]: I1001 16:01:39.975960 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="4222e1cd-2c24-403c-99cb-3bce6a0e8881" containerName="nova-scheduler-scheduler" containerID="cri-o://e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568" gracePeriod=30 Oct 01 16:01:40 crc kubenswrapper[4949]: I1001 16:01:40.986049 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-conductor-0" event={"ID":"dffab7bc-2015-4dbe-9c81-b3e61eeface6","Type":"ContainerStarted","Data":"7f5eb03a16708a172c0f6da2e5a2e21f114f88dd6cb860da9ef24bba97bee922"} Oct 01 
16:01:40 crc kubenswrapper[4949]: I1001 16:01:40.986488 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:41 crc kubenswrapper[4949]: I1001 16:01:41.007739 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-conductor-0" podStartSLOduration=2.007718741 podStartE2EDuration="2.007718741s" podCreationTimestamp="2025-10-01 16:01:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:01:41.003787342 +0000 UTC m=+1200.309393533" watchObservedRunningTime="2025-10-01 16:01:41.007718741 +0000 UTC m=+1200.313324932" Oct 01 16:01:42 crc kubenswrapper[4949]: E1001 16:01:42.339282 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 01 16:01:42 crc kubenswrapper[4949]: E1001 16:01:42.342058 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 01 16:01:42 crc kubenswrapper[4949]: E1001 16:01:42.343722 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 01 16:01:42 crc kubenswrapper[4949]: E1001 16:01:42.343784 4949 prober.go:104] "Probe 
errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="4222e1cd-2c24-403c-99cb-3bce6a0e8881" containerName="nova-scheduler-scheduler" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.461981 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.622507 4949 scope.go:117] "RemoveContainer" containerID="524e22d9cbe9a7cc4ad23b6a8a6d56f2af27269655ead93571cd7bf2dbf22a9d" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.632066 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-combined-ca-bundle\") pod \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.632204 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-config-data\") pod \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.633375 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vd5q4\" (UniqueName: \"kubernetes.io/projected/4222e1cd-2c24-403c-99cb-3bce6a0e8881-kube-api-access-vd5q4\") pod \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\" (UID: \"4222e1cd-2c24-403c-99cb-3bce6a0e8881\") " Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.646667 4949 scope.go:117] "RemoveContainer" containerID="b5270224f8594c43df03d1f541377692b4dd57279b169d1850b498df5d25e0d7" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.646965 4949 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4222e1cd-2c24-403c-99cb-3bce6a0e8881-kube-api-access-vd5q4" (OuterVolumeSpecName: "kube-api-access-vd5q4") pod "4222e1cd-2c24-403c-99cb-3bce6a0e8881" (UID: "4222e1cd-2c24-403c-99cb-3bce6a0e8881"). InnerVolumeSpecName "kube-api-access-vd5q4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.687526 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-config-data" (OuterVolumeSpecName: "config-data") pod "4222e1cd-2c24-403c-99cb-3bce6a0e8881" (UID: "4222e1cd-2c24-403c-99cb-3bce6a0e8881"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.696909 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "4222e1cd-2c24-403c-99cb-3bce6a0e8881" (UID: "4222e1cd-2c24-403c-99cb-3bce6a0e8881"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.697420 4949 scope.go:117] "RemoveContainer" containerID="fa5568612ac95cc0e5b97335425fd21fb4eb7fd2b1aae3c18644a7896f7bd353" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.735379 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.735552 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vd5q4\" (UniqueName: \"kubernetes.io/projected/4222e1cd-2c24-403c-99cb-3bce6a0e8881-kube-api-access-vd5q4\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.735613 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/4222e1cd-2c24-403c-99cb-3bce6a0e8881-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.783098 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.938799 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vfxgj\" (UniqueName: \"kubernetes.io/projected/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-kube-api-access-vfxgj\") pod \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.939234 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-combined-ca-bundle\") pod \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.939294 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-config-data\") pod \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.939319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-logs\") pod \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\" (UID: \"8b84835e-7891-4f3f-b33f-c1af53dfb7ae\") " Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.939995 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-logs" (OuterVolumeSpecName: "logs") pod "8b84835e-7891-4f3f-b33f-c1af53dfb7ae" (UID: "8b84835e-7891-4f3f-b33f-c1af53dfb7ae"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.940440 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-logs\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.943389 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-kube-api-access-vfxgj" (OuterVolumeSpecName: "kube-api-access-vfxgj") pod "8b84835e-7891-4f3f-b33f-c1af53dfb7ae" (UID: "8b84835e-7891-4f3f-b33f-c1af53dfb7ae"). InnerVolumeSpecName "kube-api-access-vfxgj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.970596 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-config-data" (OuterVolumeSpecName: "config-data") pod "8b84835e-7891-4f3f-b33f-c1af53dfb7ae" (UID: "8b84835e-7891-4f3f-b33f-c1af53dfb7ae"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:43 crc kubenswrapper[4949]: I1001 16:01:43.972877 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "8b84835e-7891-4f3f-b33f-c1af53dfb7ae" (UID: "8b84835e-7891-4f3f-b33f-c1af53dfb7ae"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.014395 4949 generic.go:334] "Generic (PLEG): container finished" podID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerID="1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824" exitCode=0 Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.014425 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.014469 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b84835e-7891-4f3f-b33f-c1af53dfb7ae","Type":"ContainerDied","Data":"1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824"} Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.014495 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"8b84835e-7891-4f3f-b33f-c1af53dfb7ae","Type":"ContainerDied","Data":"a255eef98a3219b6e5ad1fb0c50a43419b1efb3c85acf536255cf7f1888c1fbb"} Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.014511 4949 scope.go:117] "RemoveContainer" containerID="1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.016469 4949 generic.go:334] "Generic (PLEG): container finished" podID="4222e1cd-2c24-403c-99cb-3bce6a0e8881" containerID="e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568" exitCode=0 Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.016536 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"4222e1cd-2c24-403c-99cb-3bce6a0e8881","Type":"ContainerDied","Data":"e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568"} Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.016563 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"4222e1cd-2c24-403c-99cb-3bce6a0e8881","Type":"ContainerDied","Data":"62690b5a64b1473064babf5898c450d149689717e716005f01c51a5c7970b165"} Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.016684 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.034418 4949 scope.go:117] "RemoveContainer" containerID="7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.044046 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vfxgj\" (UniqueName: \"kubernetes.io/projected/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-kube-api-access-vfxgj\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.044080 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.044093 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8b84835e-7891-4f3f-b33f-c1af53dfb7ae-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.058843 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.065383 4949 scope.go:117] "RemoveContainer" containerID="1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824" Oct 01 16:01:44 crc kubenswrapper[4949]: E1001 16:01:44.065862 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824\": container with ID starting with 
1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824 not found: ID does not exist" containerID="1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.065898 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824"} err="failed to get container status \"1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824\": rpc error: code = NotFound desc = could not find container \"1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824\": container with ID starting with 1b229003ee5c6f37a59b4cddce6874dd34d70c5d85dca00a9e9c53f319d4c824 not found: ID does not exist" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.065924 4949 scope.go:117] "RemoveContainer" containerID="7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6" Oct 01 16:01:44 crc kubenswrapper[4949]: E1001 16:01:44.066165 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6\": container with ID starting with 7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6 not found: ID does not exist" containerID="7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.066187 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6"} err="failed to get container status \"7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6\": rpc error: code = NotFound desc = could not find container \"7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6\": container with ID starting with 7e40e66bb3034b5bbe865ab775e60fe057144a60b37c318a0c8db1c50d2a75e6 not found: ID does not 
exist" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.066207 4949 scope.go:117] "RemoveContainer" containerID="e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.083616 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.086995 4949 scope.go:117] "RemoveContainer" containerID="e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568" Oct 01 16:01:44 crc kubenswrapper[4949]: E1001 16:01:44.087571 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568\": container with ID starting with e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568 not found: ID does not exist" containerID="e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.087664 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568"} err="failed to get container status \"e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568\": rpc error: code = NotFound desc = could not find container \"e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568\": container with ID starting with e2a82b092beaa99eb673529b3439ec6bbde4cb8aad54ba2831af17252204c568 not found: ID does not exist" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.097492 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.109253 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.116568 4949 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack/nova-scheduler-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: E1001 16:01:44.117028 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4222e1cd-2c24-403c-99cb-3bce6a0e8881" containerName="nova-scheduler-scheduler" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.117049 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4222e1cd-2c24-403c-99cb-3bce6a0e8881" containerName="nova-scheduler-scheduler" Oct 01 16:01:44 crc kubenswrapper[4949]: E1001 16:01:44.117066 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-log" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.117090 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-log" Oct 01 16:01:44 crc kubenswrapper[4949]: E1001 16:01:44.117105 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-api" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.117111 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-api" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.117364 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4222e1cd-2c24-403c-99cb-3bce6a0e8881" containerName="nova-scheduler-scheduler" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.117386 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-api" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.117406 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" containerName="nova-api-log" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.118162 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.124191 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.140637 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.147412 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.148740 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.152672 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.154859 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.247131 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.247203 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pf9sn\" (UniqueName: \"kubernetes.io/projected/b521e871-1589-4db3-a0dc-06eedffd3ada-kube-api-access-pf9sn\") pod \"nova-scheduler-0\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.247239 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfqlq\" (UniqueName: 
\"kubernetes.io/projected/411f8048-cb89-4e2f-bedb-e22259455682-kube-api-access-zfqlq\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.247274 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/411f8048-cb89-4e2f-bedb-e22259455682-logs\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.247316 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-config-data\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.247836 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.247926 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-config-data\") pod \"nova-scheduler-0\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.349518 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " 
pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.349777 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-config-data\") pod \"nova-scheduler-0\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.349930 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.350089 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pf9sn\" (UniqueName: \"kubernetes.io/projected/b521e871-1589-4db3-a0dc-06eedffd3ada-kube-api-access-pf9sn\") pod \"nova-scheduler-0\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.350222 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zfqlq\" (UniqueName: \"kubernetes.io/projected/411f8048-cb89-4e2f-bedb-e22259455682-kube-api-access-zfqlq\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.350317 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/411f8048-cb89-4e2f-bedb-e22259455682-logs\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.350433 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-config-data\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.350924 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/411f8048-cb89-4e2f-bedb-e22259455682-logs\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.354806 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.354951 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.356708 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-config-data\") pod \"nova-scheduler-0\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.364826 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-config-data\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.368188 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pf9sn\" (UniqueName: \"kubernetes.io/projected/b521e871-1589-4db3-a0dc-06eedffd3ada-kube-api-access-pf9sn\") pod \"nova-scheduler-0\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.368399 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zfqlq\" (UniqueName: \"kubernetes.io/projected/411f8048-cb89-4e2f-bedb-e22259455682-kube-api-access-zfqlq\") pod \"nova-api-0\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") " pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.438654 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.529296 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.921512 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: W1001 16:01:44.930220 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb521e871_1589_4db3_a0dc_06eedffd3ada.slice/crio-2ae9e983a34fbe5cee149e1c4fed39d0a794a717222e37b45e99c7a43636e380 WatchSource:0}: Error finding container 2ae9e983a34fbe5cee149e1c4fed39d0a794a717222e37b45e99c7a43636e380: Status 404 returned error can't find the container with id 2ae9e983a34fbe5cee149e1c4fed39d0a794a717222e37b45e99c7a43636e380 Oct 01 16:01:44 crc kubenswrapper[4949]: I1001 16:01:44.969089 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:01:44 crc kubenswrapper[4949]: W1001 16:01:44.972390 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod411f8048_cb89_4e2f_bedb_e22259455682.slice/crio-c895dc8f3bcfac712c689bb60631757d50740d390c11f248dc1989abb18fb894 WatchSource:0}: Error finding container c895dc8f3bcfac712c689bb60631757d50740d390c11f248dc1989abb18fb894: Status 404 returned error can't find the container with id c895dc8f3bcfac712c689bb60631757d50740d390c11f248dc1989abb18fb894 Oct 01 16:01:45 crc kubenswrapper[4949]: I1001 16:01:45.030820 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"411f8048-cb89-4e2f-bedb-e22259455682","Type":"ContainerStarted","Data":"c895dc8f3bcfac712c689bb60631757d50740d390c11f248dc1989abb18fb894"} Oct 01 16:01:45 crc kubenswrapper[4949]: I1001 16:01:45.035016 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b521e871-1589-4db3-a0dc-06eedffd3ada","Type":"ContainerStarted","Data":"2ae9e983a34fbe5cee149e1c4fed39d0a794a717222e37b45e99c7a43636e380"} Oct 01 16:01:45 crc kubenswrapper[4949]: I1001 16:01:45.611045 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4222e1cd-2c24-403c-99cb-3bce6a0e8881" path="/var/lib/kubelet/pods/4222e1cd-2c24-403c-99cb-3bce6a0e8881/volumes" Oct 01 16:01:45 crc kubenswrapper[4949]: I1001 16:01:45.611646 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b84835e-7891-4f3f-b33f-c1af53dfb7ae" path="/var/lib/kubelet/pods/8b84835e-7891-4f3f-b33f-c1af53dfb7ae/volumes" Oct 01 16:01:46 crc kubenswrapper[4949]: I1001 16:01:46.057296 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"411f8048-cb89-4e2f-bedb-e22259455682","Type":"ContainerStarted","Data":"7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1"} Oct 01 16:01:46 crc kubenswrapper[4949]: I1001 16:01:46.057763 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" 
event={"ID":"411f8048-cb89-4e2f-bedb-e22259455682","Type":"ContainerStarted","Data":"e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9"} Oct 01 16:01:46 crc kubenswrapper[4949]: I1001 16:01:46.060538 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b521e871-1589-4db3-a0dc-06eedffd3ada","Type":"ContainerStarted","Data":"515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a"} Oct 01 16:01:46 crc kubenswrapper[4949]: I1001 16:01:46.086385 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.086365563 podStartE2EDuration="2.086365563s" podCreationTimestamp="2025-10-01 16:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:01:46.074198424 +0000 UTC m=+1205.379804635" watchObservedRunningTime="2025-10-01 16:01:46.086365563 +0000 UTC m=+1205.391971764" Oct 01 16:01:46 crc kubenswrapper[4949]: I1001 16:01:46.094763 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.094738877 podStartE2EDuration="2.094738877s" podCreationTimestamp="2025-10-01 16:01:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:01:46.091828466 +0000 UTC m=+1205.397434657" watchObservedRunningTime="2025-10-01 16:01:46.094738877 +0000 UTC m=+1205.400345088" Oct 01 16:01:49 crc kubenswrapper[4949]: I1001 16:01:49.437002 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-conductor-0" Oct 01 16:01:49 crc kubenswrapper[4949]: I1001 16:01:49.438770 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 01 16:01:54 crc kubenswrapper[4949]: I1001 16:01:54.439199 4949 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 01 16:01:54 crc kubenswrapper[4949]: I1001 16:01:54.473920 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-scheduler-0" Oct 01 16:01:54 crc kubenswrapper[4949]: I1001 16:01:54.529888 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 01 16:01:54 crc kubenswrapper[4949]: I1001 16:01:54.529952 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 01 16:01:55 crc kubenswrapper[4949]: I1001 16:01:55.159760 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 01 16:01:55 crc kubenswrapper[4949]: I1001 16:01:55.611362 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-api" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 01 16:01:55 crc kubenswrapper[4949]: I1001 16:01:55.611692 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-log" probeResult="failure" output="Get \"http://10.217.0.180:8774/\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.212902 4949 generic.go:334] "Generic (PLEG): container finished" podID="3bca9c47-f575-421c-8560-0e9959ea3031" containerID="a67fdc8fb1e78cbec010fc0ab38a3ff652430fa3a12eb648f2cda39f56f88f06" exitCode=137 Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.212961 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" 
event={"ID":"3bca9c47-f575-421c-8560-0e9959ea3031","Type":"ContainerDied","Data":"a67fdc8fb1e78cbec010fc0ab38a3ff652430fa3a12eb648f2cda39f56f88f06"} Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.216531 4949 generic.go:334] "Generic (PLEG): container finished" podID="dcb99768-4ca7-4116-86f9-118cbcee56f4" containerID="752b9bf1705fca9fe2325f6cfbe7ce3a7bab632879d1cf95ebc09de622838b7a" exitCode=137 Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.216581 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dcb99768-4ca7-4116-86f9-118cbcee56f4","Type":"ContainerDied","Data":"752b9bf1705fca9fe2325f6cfbe7ce3a7bab632879d1cf95ebc09de622838b7a"} Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.324274 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.330969 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.518642 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-combined-ca-bundle\") pod \"dcb99768-4ca7-4116-86f9-118cbcee56f4\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.518875 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bca9c47-f575-421c-8560-0e9959ea3031-logs\") pod \"3bca9c47-f575-421c-8560-0e9959ea3031\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.518986 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-combined-ca-bundle\") pod \"3bca9c47-f575-421c-8560-0e9959ea3031\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.519041 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-config-data\") pod \"3bca9c47-f575-421c-8560-0e9959ea3031\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.519126 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgtph\" (UniqueName: \"kubernetes.io/projected/3bca9c47-f575-421c-8560-0e9959ea3031-kube-api-access-zgtph\") pod \"3bca9c47-f575-421c-8560-0e9959ea3031\" (UID: \"3bca9c47-f575-421c-8560-0e9959ea3031\") " Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.519211 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brzng\" 
(UniqueName: \"kubernetes.io/projected/dcb99768-4ca7-4116-86f9-118cbcee56f4-kube-api-access-brzng\") pod \"dcb99768-4ca7-4116-86f9-118cbcee56f4\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.519271 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-config-data\") pod \"dcb99768-4ca7-4116-86f9-118cbcee56f4\" (UID: \"dcb99768-4ca7-4116-86f9-118cbcee56f4\") " Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.519408 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3bca9c47-f575-421c-8560-0e9959ea3031-logs" (OuterVolumeSpecName: "logs") pod "3bca9c47-f575-421c-8560-0e9959ea3031" (UID: "3bca9c47-f575-421c-8560-0e9959ea3031"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.519820 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3bca9c47-f575-421c-8560-0e9959ea3031-logs\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.524973 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bca9c47-f575-421c-8560-0e9959ea3031-kube-api-access-zgtph" (OuterVolumeSpecName: "kube-api-access-zgtph") pod "3bca9c47-f575-421c-8560-0e9959ea3031" (UID: "3bca9c47-f575-421c-8560-0e9959ea3031"). InnerVolumeSpecName "kube-api-access-zgtph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.529145 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcb99768-4ca7-4116-86f9-118cbcee56f4-kube-api-access-brzng" (OuterVolumeSpecName: "kube-api-access-brzng") pod "dcb99768-4ca7-4116-86f9-118cbcee56f4" (UID: "dcb99768-4ca7-4116-86f9-118cbcee56f4"). InnerVolumeSpecName "kube-api-access-brzng". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.549117 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-config-data" (OuterVolumeSpecName: "config-data") pod "3bca9c47-f575-421c-8560-0e9959ea3031" (UID: "3bca9c47-f575-421c-8560-0e9959ea3031"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.555512 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "3bca9c47-f575-421c-8560-0e9959ea3031" (UID: "3bca9c47-f575-421c-8560-0e9959ea3031"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.563165 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-config-data" (OuterVolumeSpecName: "config-data") pod "dcb99768-4ca7-4116-86f9-118cbcee56f4" (UID: "dcb99768-4ca7-4116-86f9-118cbcee56f4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.566058 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "dcb99768-4ca7-4116-86f9-118cbcee56f4" (UID: "dcb99768-4ca7-4116-86f9-118cbcee56f4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.621195 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.621226 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/3bca9c47-f575-421c-8560-0e9959ea3031-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.621236 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgtph\" (UniqueName: \"kubernetes.io/projected/3bca9c47-f575-421c-8560-0e9959ea3031-kube-api-access-zgtph\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.621247 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brzng\" (UniqueName: \"kubernetes.io/projected/dcb99768-4ca7-4116-86f9-118cbcee56f4-kube-api-access-brzng\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.621255 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:02 crc kubenswrapper[4949]: I1001 16:02:02.621265 4949 reconciler_common.go:293] "Volume detached for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/dcb99768-4ca7-4116-86f9-118cbcee56f4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.229151 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"3bca9c47-f575-421c-8560-0e9959ea3031","Type":"ContainerDied","Data":"7d2060b8a2765b310beb7bde9d430c0e7e8ae6864c345f21ea4b2fc289218fe6"} Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.229211 4949 scope.go:117] "RemoveContainer" containerID="a67fdc8fb1e78cbec010fc0ab38a3ff652430fa3a12eb648f2cda39f56f88f06" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.230274 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.232220 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"dcb99768-4ca7-4116-86f9-118cbcee56f4","Type":"ContainerDied","Data":"e7e1b515bab802d6d44d012b4e30311a4758b03cc052c5a0f1a356fc134ef8c0"} Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.232261 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.258422 4949 scope.go:117] "RemoveContainer" containerID="f70149aab3e5b6d70b594a1f755f3f6ae7e93d7b17c5b7ec957f4817edd81835" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.289142 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.306335 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.309674 4949 scope.go:117] "RemoveContainer" containerID="752b9bf1705fca9fe2325f6cfbe7ce3a7bab632879d1cf95ebc09de622838b7a" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.320115 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.331735 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.340750 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 01 16:02:03 crc kubenswrapper[4949]: E1001 16:02:03.341215 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dcb99768-4ca7-4116-86f9-118cbcee56f4" containerName="nova-cell1-novncproxy-novncproxy" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.341234 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="dcb99768-4ca7-4116-86f9-118cbcee56f4" containerName="nova-cell1-novncproxy-novncproxy" Oct 01 16:02:03 crc kubenswrapper[4949]: E1001 16:02:03.341255 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bca9c47-f575-421c-8560-0e9959ea3031" containerName="nova-metadata-metadata" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.341264 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="3bca9c47-f575-421c-8560-0e9959ea3031" containerName="nova-metadata-metadata" Oct 01 16:02:03 crc kubenswrapper[4949]: E1001 16:02:03.341284 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bca9c47-f575-421c-8560-0e9959ea3031" containerName="nova-metadata-log" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.341292 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bca9c47-f575-421c-8560-0e9959ea3031" containerName="nova-metadata-log" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.341519 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bca9c47-f575-421c-8560-0e9959ea3031" containerName="nova-metadata-metadata" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.341537 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="dcb99768-4ca7-4116-86f9-118cbcee56f4" containerName="nova-cell1-novncproxy-novncproxy" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.341549 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bca9c47-f575-421c-8560-0e9959ea3031" containerName="nova-metadata-log" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.342304 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.346628 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-public-svc" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.346799 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-novncproxy-cell1-vencrypt" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.347107 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-novncproxy-config-data" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.353648 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.365399 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.367677 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.370792 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.371136 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.373581 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.435967 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-config-data\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.436298 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ptsdc\" (UniqueName: \"kubernetes.io/projected/661e15c7-897e-4b47-9202-95dd5c6d9456-kube-api-access-ptsdc\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.436344 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.436370 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.436442 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.436468 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.436549 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e15c7-897e-4b47-9202-95dd5c6d9456-logs\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.436628 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rplwt\" (UniqueName: \"kubernetes.io/projected/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-kube-api-access-rplwt\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.436705 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.436770 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.537955 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.538276 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-config-data\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.538405 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ptsdc\" (UniqueName: \"kubernetes.io/projected/661e15c7-897e-4b47-9202-95dd5c6d9456-kube-api-access-ptsdc\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.538515 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.538624 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.538746 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.538892 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.539647 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e15c7-897e-4b47-9202-95dd5c6d9456-logs\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.539851 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rplwt\" (UniqueName: \"kubernetes.io/projected/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-kube-api-access-rplwt\") pod \"nova-cell1-novncproxy-0\" (UID: 
\"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.540053 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.540233 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e15c7-897e-4b47-9202-95dd5c6d9456-logs\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.542553 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-novncproxy-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-nova-novncproxy-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.542678 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-combined-ca-bundle\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.543928 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.544541 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-config-data\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.544565 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"vencrypt-tls-certs\" (UniqueName: \"kubernetes.io/secret/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-vencrypt-tls-certs\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.545691 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.553976 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-config-data\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.555285 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ptsdc\" (UniqueName: \"kubernetes.io/projected/661e15c7-897e-4b47-9202-95dd5c6d9456-kube-api-access-ptsdc\") pod \"nova-metadata-0\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.556963 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rplwt\" (UniqueName: 
\"kubernetes.io/projected/ae61c1e7-7ee2-4610-9cc5-e7df710424c7-kube-api-access-rplwt\") pod \"nova-cell1-novncproxy-0\" (UID: \"ae61c1e7-7ee2-4610-9cc5-e7df710424c7\") " pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.617143 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3bca9c47-f575-421c-8560-0e9959ea3031" path="/var/lib/kubelet/pods/3bca9c47-f575-421c-8560-0e9959ea3031/volumes" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.617891 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcb99768-4ca7-4116-86f9-118cbcee56f4" path="/var/lib/kubelet/pods/dcb99768-4ca7-4116-86f9-118cbcee56f4/volumes" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.662815 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:03 crc kubenswrapper[4949]: I1001 16:02:03.688421 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:02:04 crc kubenswrapper[4949]: I1001 16:02:04.211794 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-novncproxy-0"] Oct 01 16:02:04 crc kubenswrapper[4949]: I1001 16:02:04.224450 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:04 crc kubenswrapper[4949]: W1001 16:02:04.231051 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod661e15c7_897e_4b47_9202_95dd5c6d9456.slice/crio-e0afb970676fd2da1dd166172f49e27216b4f3a2ff977afa123296dc282a0f53 WatchSource:0}: Error finding container e0afb970676fd2da1dd166172f49e27216b4f3a2ff977afa123296dc282a0f53: Status 404 returned error can't find the container with id e0afb970676fd2da1dd166172f49e27216b4f3a2ff977afa123296dc282a0f53 Oct 01 16:02:04 crc kubenswrapper[4949]: I1001 16:02:04.243180 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ae61c1e7-7ee2-4610-9cc5-e7df710424c7","Type":"ContainerStarted","Data":"c8a14339eb54a0bbfb811e6ea49fe73395471617a156274c3ec96c0c704a3a3b"} Oct 01 16:02:04 crc kubenswrapper[4949]: I1001 16:02:04.245610 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"661e15c7-897e-4b47-9202-95dd5c6d9456","Type":"ContainerStarted","Data":"e0afb970676fd2da1dd166172f49e27216b4f3a2ff977afa123296dc282a0f53"} Oct 01 16:02:04 crc kubenswrapper[4949]: I1001 16:02:04.536187 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 01 16:02:04 crc kubenswrapper[4949]: I1001 16:02:04.536752 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 01 16:02:04 crc kubenswrapper[4949]: I1001 16:02:04.539947 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-api-0" Oct 01 16:02:04 crc kubenswrapper[4949]: I1001 16:02:04.541499 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.257876 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"661e15c7-897e-4b47-9202-95dd5c6d9456","Type":"ContainerStarted","Data":"3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16"} Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.258461 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"661e15c7-897e-4b47-9202-95dd5c6d9456","Type":"ContainerStarted","Data":"99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680"} Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.259987 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-novncproxy-0" event={"ID":"ae61c1e7-7ee2-4610-9cc5-e7df710424c7","Type":"ContainerStarted","Data":"a21fac6606a1de1f15aad75a2c3da31bc862567c7a8403014b178848da126909"} Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.260559 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.277464 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.284047 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.284026844 podStartE2EDuration="2.284026844s" podCreationTimestamp="2025-10-01 16:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:02:05.279525808 +0000 UTC m=+1224.585131989" watchObservedRunningTime="2025-10-01 16:02:05.284026844 +0000 UTC m=+1224.589633045" Oct 01 
16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.325732 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-novncproxy-0" podStartSLOduration=2.325710837 podStartE2EDuration="2.325710837s" podCreationTimestamp="2025-10-01 16:02:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:02:05.314975617 +0000 UTC m=+1224.620581818" watchObservedRunningTime="2025-10-01 16:02:05.325710837 +0000 UTC m=+1224.631317028" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.455319 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-p6rns"] Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.460737 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.474014 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-p6rns"] Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.488585 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-config\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.488647 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-dns-svc\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.488855 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.489092 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.489229 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2qqdl\" (UniqueName: \"kubernetes.io/projected/c21b4c0b-7e29-4226-8745-3f942703d8f0-kube-api-access-2qqdl\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.590557 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-config\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.590637 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-dns-svc\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.590688 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.590759 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.590800 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2qqdl\" (UniqueName: \"kubernetes.io/projected/c21b4c0b-7e29-4226-8745-3f942703d8f0-kube-api-access-2qqdl\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.591999 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-nb\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.592030 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-dns-svc\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.592067 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: 
\"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-sb\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.592581 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-config\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.617158 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2qqdl\" (UniqueName: \"kubernetes.io/projected/c21b4c0b-7e29-4226-8745-3f942703d8f0-kube-api-access-2qqdl\") pod \"dnsmasq-dns-5b856c5697-p6rns\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:05 crc kubenswrapper[4949]: I1001 16:02:05.785303 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:06 crc kubenswrapper[4949]: I1001 16:02:06.258696 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-p6rns"] Oct 01 16:02:06 crc kubenswrapper[4949]: W1001 16:02:06.262984 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc21b4c0b_7e29_4226_8745_3f942703d8f0.slice/crio-9e8465bac9a873cf336b107d28e3ab039f63f9f2ff1c96ba5582fd3946e55575 WatchSource:0}: Error finding container 9e8465bac9a873cf336b107d28e3ab039f63f9f2ff1c96ba5582fd3946e55575: Status 404 returned error can't find the container with id 9e8465bac9a873cf336b107d28e3ab039f63f9f2ff1c96ba5582fd3946e55575 Oct 01 16:02:07 crc kubenswrapper[4949]: I1001 16:02:07.306832 4949 generic.go:334] "Generic (PLEG): container finished" podID="c21b4c0b-7e29-4226-8745-3f942703d8f0" containerID="390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2" exitCode=0 Oct 01 16:02:07 crc kubenswrapper[4949]: I1001 16:02:07.306924 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" event={"ID":"c21b4c0b-7e29-4226-8745-3f942703d8f0","Type":"ContainerDied","Data":"390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2"} Oct 01 16:02:07 crc kubenswrapper[4949]: I1001 16:02:07.309352 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" event={"ID":"c21b4c0b-7e29-4226-8745-3f942703d8f0","Type":"ContainerStarted","Data":"9e8465bac9a873cf336b107d28e3ab039f63f9f2ff1c96ba5582fd3946e55575"} Oct 01 16:02:07 crc kubenswrapper[4949]: I1001 16:02:07.887401 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:02:07 crc kubenswrapper[4949]: I1001 16:02:07.929240 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:02:07 crc kubenswrapper[4949]: I1001 
16:02:07.929528 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="ceilometer-central-agent" containerID="cri-o://7492efd1e575c76cde19a3982e727743ba9f888aa74bf111a896e63fb8c084ef" gracePeriod=30 Oct 01 16:02:07 crc kubenswrapper[4949]: I1001 16:02:07.929594 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="proxy-httpd" containerID="cri-o://b6e520a4ab3ec0344e5a328b0d487c299448002c3336f5e05d51ed955fc9f403" gracePeriod=30 Oct 01 16:02:07 crc kubenswrapper[4949]: I1001 16:02:07.929612 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="ceilometer-notification-agent" containerID="cri-o://9b80a3caf6be28c81a0be839223c4c5ed03a307538c8613024a7ff602573ec63" gracePeriod=30 Oct 01 16:02:07 crc kubenswrapper[4949]: I1001 16:02:07.929947 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="sg-core" containerID="cri-o://d621bf56498b2f79c4d1ae216e2a200a2a5a9e71a942bad32ab6104a94a3ab12" gracePeriod=30 Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.322251 4949 generic.go:334] "Generic (PLEG): container finished" podID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerID="b6e520a4ab3ec0344e5a328b0d487c299448002c3336f5e05d51ed955fc9f403" exitCode=0 Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.322291 4949 generic.go:334] "Generic (PLEG): container finished" podID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerID="d621bf56498b2f79c4d1ae216e2a200a2a5a9e71a942bad32ab6104a94a3ab12" exitCode=2 Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.322367 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/ceilometer-0" event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerDied","Data":"b6e520a4ab3ec0344e5a328b0d487c299448002c3336f5e05d51ed955fc9f403"} Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.322437 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerDied","Data":"d621bf56498b2f79c4d1ae216e2a200a2a5a9e71a942bad32ab6104a94a3ab12"} Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.324387 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" event={"ID":"c21b4c0b-7e29-4226-8745-3f942703d8f0","Type":"ContainerStarted","Data":"19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9"} Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.324633 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-log" containerID="cri-o://e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9" gracePeriod=30 Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.324729 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-api" containerID="cri-o://7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1" gracePeriod=30 Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.353544 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" podStartSLOduration=3.35352982 podStartE2EDuration="3.35352982s" podCreationTimestamp="2025-10-01 16:02:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:02:08.352459209 +0000 UTC m=+1227.658065400" watchObservedRunningTime="2025-10-01 
16:02:08.35352982 +0000 UTC m=+1227.659136001" Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.663844 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-cell1-novncproxy-0" Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.688612 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 01 16:02:08 crc kubenswrapper[4949]: I1001 16:02:08.688652 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 01 16:02:09 crc kubenswrapper[4949]: I1001 16:02:09.333202 4949 generic.go:334] "Generic (PLEG): container finished" podID="411f8048-cb89-4e2f-bedb-e22259455682" containerID="e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9" exitCode=143 Oct 01 16:02:09 crc kubenswrapper[4949]: I1001 16:02:09.333258 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"411f8048-cb89-4e2f-bedb-e22259455682","Type":"ContainerDied","Data":"e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9"} Oct 01 16:02:09 crc kubenswrapper[4949]: I1001 16:02:09.335869 4949 generic.go:334] "Generic (PLEG): container finished" podID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerID="7492efd1e575c76cde19a3982e727743ba9f888aa74bf111a896e63fb8c084ef" exitCode=0 Oct 01 16:02:09 crc kubenswrapper[4949]: I1001 16:02:09.336619 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerDied","Data":"7492efd1e575c76cde19a3982e727743ba9f888aa74bf111a896e63fb8c084ef"} Oct 01 16:02:09 crc kubenswrapper[4949]: I1001 16:02:09.336650 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.352008 4949 generic.go:334] "Generic (PLEG): container finished" podID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" 
containerID="9b80a3caf6be28c81a0be839223c4c5ed03a307538c8613024a7ff602573ec63" exitCode=0 Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.352155 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerDied","Data":"9b80a3caf6be28c81a0be839223c4c5ed03a307538c8613024a7ff602573ec63"} Oct 01 16:02:10 crc kubenswrapper[4949]: E1001 16:02:10.358013 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9cbd7766_b0b0_4766_a89f_0d1f0bfb1fa7.slice/crio-9b80a3caf6be28c81a0be839223c4c5ed03a307538c8613024a7ff602573ec63.scope\": RecentStats: unable to find data in memory cache]" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.535049 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.679514 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-ceilometer-tls-certs\") pod \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.679585 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rpn8l\" (UniqueName: \"kubernetes.io/projected/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-kube-api-access-rpn8l\") pod \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.679658 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-log-httpd\") pod \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\" 
(UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.679695 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-combined-ca-bundle\") pod \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.679718 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-run-httpd\") pod \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.679740 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-sg-core-conf-yaml\") pod \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.679761 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-scripts\") pod \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.679799 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-config-data\") pod \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\" (UID: \"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7\") " Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.680385 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" (UID: "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.680418 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" (UID: "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.686030 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-kube-api-access-rpn8l" (OuterVolumeSpecName: "kube-api-access-rpn8l") pod "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" (UID: "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7"). InnerVolumeSpecName "kube-api-access-rpn8l". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.695347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-scripts" (OuterVolumeSpecName: "scripts") pod "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" (UID: "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.719683 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" (UID: "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.739335 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" (UID: "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.783259 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.783542 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.783671 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.783755 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.783618 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" (UID: "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.783825 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.783892 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rpn8l\" (UniqueName: \"kubernetes.io/projected/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-kube-api-access-rpn8l\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.805706 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-config-data" (OuterVolumeSpecName: "config-data") pod "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" (UID: "9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.887220 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:10 crc kubenswrapper[4949]: I1001 16:02:10.887410 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.363298 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7","Type":"ContainerDied","Data":"d25ee1dba6d5d767a6925704a75154fcbf87aa238318c98d264c019a95c2c374"} Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.363580 4949 scope.go:117] "RemoveContainer" 
containerID="b6e520a4ab3ec0344e5a328b0d487c299448002c3336f5e05d51ed955fc9f403"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.363391 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.401865 4949 scope.go:117] "RemoveContainer" containerID="d621bf56498b2f79c4d1ae216e2a200a2a5a9e71a942bad32ab6104a94a3ab12"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.411380 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.421783 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"]
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.435016 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"]
Oct 01 16:02:11 crc kubenswrapper[4949]: E1001 16:02:11.435480 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="ceilometer-central-agent"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.435504 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="ceilometer-central-agent"
Oct 01 16:02:11 crc kubenswrapper[4949]: E1001 16:02:11.435516 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="ceilometer-notification-agent"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.435523 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="ceilometer-notification-agent"
Oct 01 16:02:11 crc kubenswrapper[4949]: E1001 16:02:11.435546 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="proxy-httpd"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.435554 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="proxy-httpd"
Oct 01 16:02:11 crc kubenswrapper[4949]: E1001 16:02:11.435590 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="sg-core"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.435597 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="sg-core"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.435813 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="proxy-httpd"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.435836 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="ceilometer-notification-agent"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.435849 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="sg-core"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.435865 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" containerName="ceilometer-central-agent"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.437789 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.441656 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.441838 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.442191 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.449644 4949 scope.go:117] "RemoveContainer" containerID="9b80a3caf6be28c81a0be839223c4c5ed03a307538c8613024a7ff602573ec63"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.474261 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.483858 4949 scope.go:117] "RemoveContainer" containerID="7492efd1e575c76cde19a3982e727743ba9f888aa74bf111a896e63fb8c084ef"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.599285 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.599523 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-run-httpd\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.599619 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wshz\" (UniqueName: \"kubernetes.io/projected/f7a262ff-0c09-4792-a7e7-e8fb709aa971-kube-api-access-7wshz\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.599719 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-scripts\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.599813 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-config-data\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.599919 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-log-httpd\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.600091 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.600205 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.614189 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7" path="/var/lib/kubelet/pods/9cbd7766-b0b0-4766-a89f-0d1f0bfb1fa7/volumes"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.704091 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.704147 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.704199 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.704214 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-run-httpd\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.704229 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wshz\" (UniqueName: \"kubernetes.io/projected/f7a262ff-0c09-4792-a7e7-e8fb709aa971-kube-api-access-7wshz\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.704251 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-scripts\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.704265 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-config-data\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.704290 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-log-httpd\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.704806 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-log-httpd\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.705199 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-run-httpd\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.710031 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-scripts\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.710040 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.711224 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.712536 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-config-data\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.713410 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.730228 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wshz\" (UniqueName: \"kubernetes.io/projected/f7a262ff-0c09-4792-a7e7-e8fb709aa971-kube-api-access-7wshz\") pod \"ceilometer-0\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") " pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.772643 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 01 16:02:11 crc kubenswrapper[4949]: I1001 16:02:11.863536 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.009850 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-combined-ca-bundle\") pod \"411f8048-cb89-4e2f-bedb-e22259455682\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") "
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.009900 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/411f8048-cb89-4e2f-bedb-e22259455682-logs\") pod \"411f8048-cb89-4e2f-bedb-e22259455682\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") "
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.010030 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zfqlq\" (UniqueName: \"kubernetes.io/projected/411f8048-cb89-4e2f-bedb-e22259455682-kube-api-access-zfqlq\") pod \"411f8048-cb89-4e2f-bedb-e22259455682\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") "
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.010137 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-config-data\") pod \"411f8048-cb89-4e2f-bedb-e22259455682\" (UID: \"411f8048-cb89-4e2f-bedb-e22259455682\") "
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.010758 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/411f8048-cb89-4e2f-bedb-e22259455682-logs" (OuterVolumeSpecName: "logs") pod "411f8048-cb89-4e2f-bedb-e22259455682" (UID: "411f8048-cb89-4e2f-bedb-e22259455682"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.014958 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/411f8048-cb89-4e2f-bedb-e22259455682-kube-api-access-zfqlq" (OuterVolumeSpecName: "kube-api-access-zfqlq") pod "411f8048-cb89-4e2f-bedb-e22259455682" (UID: "411f8048-cb89-4e2f-bedb-e22259455682"). InnerVolumeSpecName "kube-api-access-zfqlq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.046324 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-config-data" (OuterVolumeSpecName: "config-data") pod "411f8048-cb89-4e2f-bedb-e22259455682" (UID: "411f8048-cb89-4e2f-bedb-e22259455682"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.059649 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "411f8048-cb89-4e2f-bedb-e22259455682" (UID: "411f8048-cb89-4e2f-bedb-e22259455682"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.112076 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-config-data\") on node \"crc\" DevicePath \"\""
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.112117 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/411f8048-cb89-4e2f-bedb-e22259455682-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.112156 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/411f8048-cb89-4e2f-bedb-e22259455682-logs\") on node \"crc\" DevicePath \"\""
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.112166 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zfqlq\" (UniqueName: \"kubernetes.io/projected/411f8048-cb89-4e2f-bedb-e22259455682-kube-api-access-zfqlq\") on node \"crc\" DevicePath \"\""
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.252196 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"]
Oct 01 16:02:12 crc kubenswrapper[4949]: W1001 16:02:12.253267 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf7a262ff_0c09_4792_a7e7_e8fb709aa971.slice/crio-581bd4750393be3da4103f4fff0134dd605b6310abb00213d8bd2cd2aa22e392 WatchSource:0}: Error finding container 581bd4750393be3da4103f4fff0134dd605b6310abb00213d8bd2cd2aa22e392: Status 404 returned error can't find the container with id 581bd4750393be3da4103f4fff0134dd605b6310abb00213d8bd2cd2aa22e392
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.255880 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.375465 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerStarted","Data":"581bd4750393be3da4103f4fff0134dd605b6310abb00213d8bd2cd2aa22e392"}
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.377731 4949 generic.go:334] "Generic (PLEG): container finished" podID="411f8048-cb89-4e2f-bedb-e22259455682" containerID="7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1" exitCode=0
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.377774 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"411f8048-cb89-4e2f-bedb-e22259455682","Type":"ContainerDied","Data":"7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1"}
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.377790 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.377811 4949 scope.go:117] "RemoveContainer" containerID="7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.377799 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"411f8048-cb89-4e2f-bedb-e22259455682","Type":"ContainerDied","Data":"c895dc8f3bcfac712c689bb60631757d50740d390c11f248dc1989abb18fb894"}
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.407565 4949 scope.go:117] "RemoveContainer" containerID="e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.421257 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"]
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.433444 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"]
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.438321 4949 scope.go:117] "RemoveContainer" containerID="7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1"
Oct 01 16:02:12 crc kubenswrapper[4949]: E1001 16:02:12.438751 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1\": container with ID starting with 7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1 not found: ID does not exist" containerID="7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.438778 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1"} err="failed to get container status \"7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1\": rpc error: code = NotFound desc = could not find container \"7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1\": container with ID starting with 7117362f0058d60da14bbcb14a96ff5b03d0b0faf5bad4063dd9112ea2f7eee1 not found: ID does not exist"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.438797 4949 scope.go:117] "RemoveContainer" containerID="e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9"
Oct 01 16:02:12 crc kubenswrapper[4949]: E1001 16:02:12.439411 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9\": container with ID starting with e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9 not found: ID does not exist" containerID="e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.439432 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9"} err="failed to get container status \"e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9\": rpc error: code = NotFound desc = could not find container \"e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9\": container with ID starting with e8515c5b6619a931c017ad4d7493ae11d8bfdcdff6c037bc02ebd00d2328ddb9 not found: ID does not exist"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.442614 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"]
Oct 01 16:02:12 crc kubenswrapper[4949]: E1001 16:02:12.443022 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-api"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.443038 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-api"
Oct 01 16:02:12 crc kubenswrapper[4949]: E1001 16:02:12.443061 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-log"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.443069 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-log"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.443308 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-api"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.443339 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="411f8048-cb89-4e2f-bedb-e22259455682" containerName="nova-api-log"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.444432 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.447989 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.448198 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.448343 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.451824 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.528233 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58fhw\" (UniqueName: \"kubernetes.io/projected/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-kube-api-access-58fhw\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.528319 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-config-data\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.528426 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.528528 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-logs\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.528573 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-public-tls-certs\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.528615 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-internal-tls-certs\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.630310 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-config-data\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.630440 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.630526 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-logs\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.630565 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-public-tls-certs\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.630598 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-internal-tls-certs\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.630650 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58fhw\" (UniqueName: \"kubernetes.io/projected/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-kube-api-access-58fhw\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.630948 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-logs\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.635590 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-internal-tls-certs\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.635669 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-public-tls-certs\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.638649 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.641772 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-config-data\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.653453 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58fhw\" (UniqueName: \"kubernetes.io/projected/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-kube-api-access-58fhw\") pod \"nova-api-0\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " pod="openstack/nova-api-0"
Oct 01 16:02:12 crc kubenswrapper[4949]: I1001 16:02:12.776276 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0"
Oct 01 16:02:13 crc kubenswrapper[4949]: W1001 16:02:13.211716 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod21fb8ff9_9ef8_46ba_9e54_2cfc5052a6ba.slice/crio-2a1497d9017d758e831357aee7e122f436362b9e05814bf43d81e03ddbf8ceba WatchSource:0}: Error finding container 2a1497d9017d758e831357aee7e122f436362b9e05814bf43d81e03ddbf8ceba: Status 404 returned error can't find the container with id 2a1497d9017d758e831357aee7e122f436362b9e05814bf43d81e03ddbf8ceba
Oct 01 16:02:13 crc kubenswrapper[4949]: I1001 16:02:13.213576 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"]
Oct 01 16:02:13 crc kubenswrapper[4949]: I1001 16:02:13.386521 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerStarted","Data":"946a6b626d30f73202269e09687cc84f40b3410711e2e3bd0f9193b7a401a14e"}
Oct 01 16:02:13 crc kubenswrapper[4949]: I1001 16:02:13.388705 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba","Type":"ContainerStarted","Data":"2a1497d9017d758e831357aee7e122f436362b9e05814bf43d81e03ddbf8ceba"}
Oct 01 16:02:13 crc kubenswrapper[4949]: I1001 16:02:13.610712 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="411f8048-cb89-4e2f-bedb-e22259455682" path="/var/lib/kubelet/pods/411f8048-cb89-4e2f-bedb-e22259455682/volumes"
Oct 01 16:02:13 crc kubenswrapper[4949]: I1001 16:02:13.663596 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-cell1-novncproxy-0"
Oct 01 16:02:13 crc kubenswrapper[4949]: I1001 16:02:13.683211 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-cell1-novncproxy-0"
Oct 01 16:02:13 crc kubenswrapper[4949]: I1001 16:02:13.689539 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 01 16:02:13 crc kubenswrapper[4949]: I1001 16:02:13.689785 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.398515 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba","Type":"ContainerStarted","Data":"6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021"}
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.398564 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba","Type":"ContainerStarted","Data":"232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756"}
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.400903 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerStarted","Data":"d2a043ada7ee2b29c171a5df0dec7188fa155611656b0510de9231338296fe57"}
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.421468 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.421447491 podStartE2EDuration="2.421447491s" podCreationTimestamp="2025-10-01 16:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:02:14.415446094 +0000 UTC m=+1233.721052285" watchObservedRunningTime="2025-10-01 16:02:14.421447491 +0000 UTC m=+1233.727053682"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.421789 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-cell1-novncproxy-0"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.554038 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-cell1-cell-mapping-bkdh7"]
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.555808 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bkdh7"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.561419 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-config-data"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.561611 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-manage-scripts"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.566509 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bkdh7"]
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.576066 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-config-data\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.576230 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-htjzt\" (UniqueName: \"kubernetes.io/projected/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-kube-api-access-htjzt\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.576327 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-scripts\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.576533 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.677871 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-config-data\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.678443 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-htjzt\" (UniqueName: \"kubernetes.io/projected/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-kube-api-access-htjzt\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.678763 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-scripts\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7"
Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.678827 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") "
pod="openstack/nova-cell1-cell-mapping-bkdh7" Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.682823 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-config-data\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7" Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.683792 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-scripts\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7" Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.692528 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-combined-ca-bundle\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7" Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.697628 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-htjzt\" (UniqueName: \"kubernetes.io/projected/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-kube-api-access-htjzt\") pod \"nova-cell1-cell-mapping-bkdh7\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " pod="openstack/nova-cell1-cell-mapping-bkdh7" Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.706735 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 
16:02:14.707229 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.182:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 01 16:02:14 crc kubenswrapper[4949]: I1001 16:02:14.882211 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bkdh7" Oct 01 16:02:15 crc kubenswrapper[4949]: I1001 16:02:15.417213 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-cell1-cell-mapping-bkdh7"] Oct 01 16:02:15 crc kubenswrapper[4949]: I1001 16:02:15.426189 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerStarted","Data":"0c91a6d2b2323683370ce4b166f8316585c1db48b081a1cfc6659d754f9be901"} Oct 01 16:02:15 crc kubenswrapper[4949]: I1001 16:02:15.787300 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:02:15 crc kubenswrapper[4949]: I1001 16:02:15.860722 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-v47vk"] Oct 01 16:02:15 crc kubenswrapper[4949]: I1001 16:02:15.861005 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" podUID="4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" containerName="dnsmasq-dns" containerID="cri-o://c10c04897d2fcea21ba258dbd462d7f62a5343064bda9b186b8c5869e7bb2c92" gracePeriod=10 Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.435579 4949 generic.go:334] "Generic (PLEG): container finished" podID="4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" containerID="c10c04897d2fcea21ba258dbd462d7f62a5343064bda9b186b8c5869e7bb2c92" exitCode=0 Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 
16:02:16.435677 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" event={"ID":"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385","Type":"ContainerDied","Data":"c10c04897d2fcea21ba258dbd462d7f62a5343064bda9b186b8c5869e7bb2c92"} Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.435826 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" event={"ID":"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385","Type":"ContainerDied","Data":"8b5b9e5f714e6ed7ee3de556cb7fdfbbc0701dcdf4d483ce88c9b4e28709f4fe"} Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.435839 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b5b9e5f714e6ed7ee3de556cb7fdfbbc0701dcdf4d483ce88c9b4e28709f4fe" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.437496 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bkdh7" event={"ID":"ef377cf2-dc25-42b4-bbbc-057ddd12c20d","Type":"ContainerStarted","Data":"d9737839c3b9452cbe4b575d7008806208f989bdaf749ddf577ea68583eabdbf"} Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.437525 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bkdh7" event={"ID":"ef377cf2-dc25-42b4-bbbc-057ddd12c20d","Type":"ContainerStarted","Data":"1299028e96e80ce5a0aa4cfcec70076a978320da26bdfa7a36cd851d3b11c5f9"} Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.448887 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.453798 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-cell1-cell-mapping-bkdh7" podStartSLOduration=2.453782507 podStartE2EDuration="2.453782507s" podCreationTimestamp="2025-10-01 16:02:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:02:16.452970935 +0000 UTC m=+1235.758577126" watchObservedRunningTime="2025-10-01 16:02:16.453782507 +0000 UTC m=+1235.759388698" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.622271 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-config\") pod \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.622338 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-nb\") pod \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.622688 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2snp6\" (UniqueName: \"kubernetes.io/projected/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-kube-api-access-2snp6\") pod \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.622849 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-dns-svc\") pod \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\" (UID: 
\"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.623085 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-sb\") pod \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\" (UID: \"4f13ebe6-cca0-46cf-a0c9-aa20e2abc385\") " Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.633938 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-kube-api-access-2snp6" (OuterVolumeSpecName: "kube-api-access-2snp6") pod "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" (UID: "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385"). InnerVolumeSpecName "kube-api-access-2snp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.672088 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" (UID: "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.687175 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-config" (OuterVolumeSpecName: "config") pod "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" (UID: "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.692936 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" (UID: "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.702753 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" (UID: "4f13ebe6-cca0-46cf-a0c9-aa20e2abc385"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.727182 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-config\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.727644 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.727724 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2snp6\" (UniqueName: \"kubernetes.io/projected/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-kube-api-access-2snp6\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:16 crc kubenswrapper[4949]: I1001 16:02:16.727832 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:16 crc 
kubenswrapper[4949]: I1001 16:02:16.727974 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:17 crc kubenswrapper[4949]: I1001 16:02:17.446076 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-566b5b7845-v47vk" Oct 01 16:02:17 crc kubenswrapper[4949]: I1001 16:02:17.494403 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-v47vk"] Oct 01 16:02:17 crc kubenswrapper[4949]: I1001 16:02:17.501967 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-566b5b7845-v47vk"] Oct 01 16:02:17 crc kubenswrapper[4949]: I1001 16:02:17.612697 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" path="/var/lib/kubelet/pods/4f13ebe6-cca0-46cf-a0c9-aa20e2abc385/volumes" Oct 01 16:02:18 crc kubenswrapper[4949]: I1001 16:02:18.456655 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerStarted","Data":"5fb65314d9da2accfa5168bf188a43f203c5777662b29a4aaa382fd2b953230b"} Oct 01 16:02:18 crc kubenswrapper[4949]: I1001 16:02:18.457587 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 01 16:02:18 crc kubenswrapper[4949]: I1001 16:02:18.494659 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.224802878 podStartE2EDuration="7.494639461s" podCreationTimestamp="2025-10-01 16:02:11 +0000 UTC" firstStartedPulling="2025-10-01 16:02:12.255647133 +0000 UTC m=+1231.561253324" lastFinishedPulling="2025-10-01 16:02:17.525483716 +0000 UTC m=+1236.831089907" observedRunningTime="2025-10-01 16:02:18.481147275 +0000 UTC 
m=+1237.786753466" watchObservedRunningTime="2025-10-01 16:02:18.494639461 +0000 UTC m=+1237.800245652" Oct 01 16:02:20 crc kubenswrapper[4949]: I1001 16:02:20.478764 4949 generic.go:334] "Generic (PLEG): container finished" podID="ef377cf2-dc25-42b4-bbbc-057ddd12c20d" containerID="d9737839c3b9452cbe4b575d7008806208f989bdaf749ddf577ea68583eabdbf" exitCode=0 Oct 01 16:02:20 crc kubenswrapper[4949]: I1001 16:02:20.478828 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bkdh7" event={"ID":"ef377cf2-dc25-42b4-bbbc-057ddd12c20d","Type":"ContainerDied","Data":"d9737839c3b9452cbe4b575d7008806208f989bdaf749ddf577ea68583eabdbf"} Oct 01 16:02:21 crc kubenswrapper[4949]: I1001 16:02:21.870636 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bkdh7" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.028783 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-combined-ca-bundle\") pod \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.029206 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htjzt\" (UniqueName: \"kubernetes.io/projected/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-kube-api-access-htjzt\") pod \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.029361 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-config-data\") pod \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.029559 
4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-scripts\") pod \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\" (UID: \"ef377cf2-dc25-42b4-bbbc-057ddd12c20d\") " Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.033778 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-kube-api-access-htjzt" (OuterVolumeSpecName: "kube-api-access-htjzt") pod "ef377cf2-dc25-42b4-bbbc-057ddd12c20d" (UID: "ef377cf2-dc25-42b4-bbbc-057ddd12c20d"). InnerVolumeSpecName "kube-api-access-htjzt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.042037 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-scripts" (OuterVolumeSpecName: "scripts") pod "ef377cf2-dc25-42b4-bbbc-057ddd12c20d" (UID: "ef377cf2-dc25-42b4-bbbc-057ddd12c20d"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.054767 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-config-data" (OuterVolumeSpecName: "config-data") pod "ef377cf2-dc25-42b4-bbbc-057ddd12c20d" (UID: "ef377cf2-dc25-42b4-bbbc-057ddd12c20d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.055726 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ef377cf2-dc25-42b4-bbbc-057ddd12c20d" (UID: "ef377cf2-dc25-42b4-bbbc-057ddd12c20d"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.131383 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.131413 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htjzt\" (UniqueName: \"kubernetes.io/projected/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-kube-api-access-htjzt\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.131425 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.131434 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ef377cf2-dc25-42b4-bbbc-057ddd12c20d-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.497607 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-cell1-cell-mapping-bkdh7" event={"ID":"ef377cf2-dc25-42b4-bbbc-057ddd12c20d","Type":"ContainerDied","Data":"1299028e96e80ce5a0aa4cfcec70076a978320da26bdfa7a36cd851d3b11c5f9"} Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.497659 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1299028e96e80ce5a0aa4cfcec70076a978320da26bdfa7a36cd851d3b11c5f9" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.497665 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-cell1-cell-mapping-bkdh7" Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.679770 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.680032 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerName="nova-api-log" containerID="cri-o://232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756" gracePeriod=30 Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.680259 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-api-0" podUID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerName="nova-api-api" containerID="cri-o://6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021" gracePeriod=30 Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.690322 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.690603 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-scheduler-0" podUID="b521e871-1589-4db3-a0dc-06eedffd3ada" containerName="nova-scheduler-scheduler" containerID="cri-o://515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a" gracePeriod=30 Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.714677 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.714937 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-log" containerID="cri-o://99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680" gracePeriod=30 Oct 01 16:02:22 crc kubenswrapper[4949]: I1001 16:02:22.715082 4949 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/nova-metadata-0" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-metadata" containerID="cri-o://3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16" gracePeriod=30 Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.247917 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.351866 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-58fhw\" (UniqueName: \"kubernetes.io/projected/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-kube-api-access-58fhw\") pod \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.351968 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-public-tls-certs\") pod \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.352012 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-combined-ca-bundle\") pod \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.352146 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-internal-tls-certs\") pod \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.352167 4949 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-config-data\") pod \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.352288 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-logs\") pod \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\" (UID: \"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba\") " Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.352964 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-logs" (OuterVolumeSpecName: "logs") pod "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" (UID: "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.358012 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-kube-api-access-58fhw" (OuterVolumeSpecName: "kube-api-access-58fhw") pod "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" (UID: "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba"). InnerVolumeSpecName "kube-api-access-58fhw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.381203 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-config-data" (OuterVolumeSpecName: "config-data") pod "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" (UID: "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.387900 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" (UID: "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.405257 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" (UID: "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.410615 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" (UID: "21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.455514 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-58fhw\" (UniqueName: \"kubernetes.io/projected/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-kube-api-access-58fhw\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.455582 4949 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.455595 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.455631 4949 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.455646 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.455657 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba-logs\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.508696 4949 generic.go:334] "Generic (PLEG): container finished" podID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerID="99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680" exitCode=143 Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.508782 4949 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"661e15c7-897e-4b47-9202-95dd5c6d9456","Type":"ContainerDied","Data":"99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680"} Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.510760 4949 generic.go:334] "Generic (PLEG): container finished" podID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerID="6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021" exitCode=0 Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.510777 4949 generic.go:334] "Generic (PLEG): container finished" podID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerID="232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756" exitCode=143 Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.510791 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba","Type":"ContainerDied","Data":"6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021"} Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.510852 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba","Type":"ContainerDied","Data":"232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756"} Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.510862 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.510876 4949 scope.go:117] "RemoveContainer" containerID="6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.510865 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba","Type":"ContainerDied","Data":"2a1497d9017d758e831357aee7e122f436362b9e05814bf43d81e03ddbf8ceba"} Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.545530 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.553329 4949 scope.go:117] "RemoveContainer" containerID="232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.562784 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.572795 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-api-0"] Oct 01 16:02:23 crc kubenswrapper[4949]: E1001 16:02:23.573187 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" containerName="init" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.573205 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" containerName="init" Oct 01 16:02:23 crc kubenswrapper[4949]: E1001 16:02:23.573216 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" containerName="dnsmasq-dns" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.573223 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" containerName="dnsmasq-dns" Oct 01 16:02:23 crc kubenswrapper[4949]: E1001 16:02:23.573232 4949 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerName="nova-api-log" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.573239 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerName="nova-api-log" Oct 01 16:02:23 crc kubenswrapper[4949]: E1001 16:02:23.573273 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerName="nova-api-api" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.573281 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerName="nova-api-api" Oct 01 16:02:23 crc kubenswrapper[4949]: E1001 16:02:23.573290 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ef377cf2-dc25-42b4-bbbc-057ddd12c20d" containerName="nova-manage" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.573296 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ef377cf2-dc25-42b4-bbbc-057ddd12c20d" containerName="nova-manage" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.573514 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ef377cf2-dc25-42b4-bbbc-057ddd12c20d" containerName="nova-manage" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.573531 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerName="nova-api-log" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.573548 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f13ebe6-cca0-46cf-a0c9-aa20e2abc385" containerName="dnsmasq-dns" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.573562 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" containerName="nova-api-api" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.575892 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.580507 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-internal-svc" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.580677 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-public-svc" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.584937 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-api-config-data" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.587477 4949 scope.go:117] "RemoveContainer" containerID="6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021" Oct 01 16:02:23 crc kubenswrapper[4949]: E1001 16:02:23.592669 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021\": container with ID starting with 6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021 not found: ID does not exist" containerID="6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.592708 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021"} err="failed to get container status \"6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021\": rpc error: code = NotFound desc = could not find container \"6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021\": container with ID starting with 6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021 not found: ID does not exist" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.592729 4949 scope.go:117] "RemoveContainer" containerID="232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756" Oct 01 16:02:23 crc 
kubenswrapper[4949]: E1001 16:02:23.593718 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756\": container with ID starting with 232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756 not found: ID does not exist" containerID="232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.593792 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756"} err="failed to get container status \"232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756\": rpc error: code = NotFound desc = could not find container \"232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756\": container with ID starting with 232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756 not found: ID does not exist" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.593839 4949 scope.go:117] "RemoveContainer" containerID="6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.594931 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021"} err="failed to get container status \"6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021\": rpc error: code = NotFound desc = could not find container \"6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021\": container with ID starting with 6d53e22659598a78146c34a260d1508fd84a87d3c8c1cc0b353f2949fd783021 not found: ID does not exist" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.594985 4949 scope.go:117] "RemoveContainer" containerID="232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756" Oct 01 
16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.610525 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756"} err="failed to get container status \"232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756\": rpc error: code = NotFound desc = could not find container \"232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756\": container with ID starting with 232b490a022160b8d907d3d1be8c42fb6b78ef0d88b2aa99f6d6b18d989a1756 not found: ID does not exist" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.627216 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba" path="/var/lib/kubelet/pods/21fb8ff9-9ef8-46ba-9e54-2cfc5052a6ba/volumes" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.628075 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.762835 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-public-tls-certs\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.763190 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84k6\" (UniqueName: \"kubernetes.io/projected/9fb393d4-7b75-432e-aaec-767addd7eb30-kube-api-access-d84k6\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.763210 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.763258 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-config-data\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.763326 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.763368 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fb393d4-7b75-432e-aaec-767addd7eb30-logs\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.867016 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-public-tls-certs\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.867159 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84k6\" (UniqueName: \"kubernetes.io/projected/9fb393d4-7b75-432e-aaec-767addd7eb30-kube-api-access-d84k6\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 
16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.867193 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.867254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-config-data\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.867332 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.867380 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fb393d4-7b75-432e-aaec-767addd7eb30-logs\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.867865 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9fb393d4-7b75-432e-aaec-767addd7eb30-logs\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.872891 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-public-tls-certs\") pod \"nova-api-0\" (UID: 
\"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.873773 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-internal-tls-certs\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.874472 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-config-data\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.880323 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9fb393d4-7b75-432e-aaec-767addd7eb30-combined-ca-bundle\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.886543 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84k6\" (UniqueName: \"kubernetes.io/projected/9fb393d4-7b75-432e-aaec-767addd7eb30-kube-api-access-d84k6\") pod \"nova-api-0\" (UID: \"9fb393d4-7b75-432e-aaec-767addd7eb30\") " pod="openstack/nova-api-0" Oct 01 16:02:23 crc kubenswrapper[4949]: I1001 16:02:23.912187 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-api-0" Oct 01 16:02:24 crc kubenswrapper[4949]: W1001 16:02:24.349341 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9fb393d4_7b75_432e_aaec_767addd7eb30.slice/crio-f54507518fde7f0400ba92698e368b6970b1c29a513284872128107db184b957 WatchSource:0}: Error finding container f54507518fde7f0400ba92698e368b6970b1c29a513284872128107db184b957: Status 404 returned error can't find the container with id f54507518fde7f0400ba92698e368b6970b1c29a513284872128107db184b957 Oct 01 16:02:24 crc kubenswrapper[4949]: I1001 16:02:24.351337 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-api-0"] Oct 01 16:02:24 crc kubenswrapper[4949]: E1001 16:02:24.440716 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 01 16:02:24 crc kubenswrapper[4949]: E1001 16:02:24.442168 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 01 16:02:24 crc kubenswrapper[4949]: E1001 16:02:24.443582 4949 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a" cmd=["/usr/bin/pgrep","-r","DRST","nova-scheduler"] Oct 01 16:02:24 crc kubenswrapper[4949]: E1001 16:02:24.443703 4949 prober.go:104] 
"Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openstack/nova-scheduler-0" podUID="b521e871-1589-4db3-a0dc-06eedffd3ada" containerName="nova-scheduler-scheduler" Oct 01 16:02:24 crc kubenswrapper[4949]: I1001 16:02:24.522105 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fb393d4-7b75-432e-aaec-767addd7eb30","Type":"ContainerStarted","Data":"343d1dcd8606cfa8e922c9cd35ce9d17b293698aa2bbd10bc46c1044af6a89e1"} Oct 01 16:02:24 crc kubenswrapper[4949]: I1001 16:02:24.522175 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fb393d4-7b75-432e-aaec-767addd7eb30","Type":"ContainerStarted","Data":"f54507518fde7f0400ba92698e368b6970b1c29a513284872128107db184b957"} Oct 01 16:02:25 crc kubenswrapper[4949]: I1001 16:02:25.536470 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-api-0" event={"ID":"9fb393d4-7b75-432e-aaec-767addd7eb30","Type":"ContainerStarted","Data":"aa2ec3b66f738f643789b96e2a69224ba2b9c30881f58b9d07fe0b6418373349"} Oct 01 16:02:25 crc kubenswrapper[4949]: I1001 16:02:25.563784 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-api-0" podStartSLOduration=2.563765734 podStartE2EDuration="2.563765734s" podCreationTimestamp="2025-10-01 16:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:02:25.560502633 +0000 UTC m=+1244.866108824" watchObservedRunningTime="2025-10-01 16:02:25.563765734 +0000 UTC m=+1244.869371915" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.303488 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.412228 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptsdc\" (UniqueName: \"kubernetes.io/projected/661e15c7-897e-4b47-9202-95dd5c6d9456-kube-api-access-ptsdc\") pod \"661e15c7-897e-4b47-9202-95dd5c6d9456\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.412337 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-combined-ca-bundle\") pod \"661e15c7-897e-4b47-9202-95dd5c6d9456\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.412428 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-config-data\") pod \"661e15c7-897e-4b47-9202-95dd5c6d9456\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.412501 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-nova-metadata-tls-certs\") pod \"661e15c7-897e-4b47-9202-95dd5c6d9456\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.412524 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e15c7-897e-4b47-9202-95dd5c6d9456-logs\") pod \"661e15c7-897e-4b47-9202-95dd5c6d9456\" (UID: \"661e15c7-897e-4b47-9202-95dd5c6d9456\") " Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.413069 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/661e15c7-897e-4b47-9202-95dd5c6d9456-logs" (OuterVolumeSpecName: "logs") pod "661e15c7-897e-4b47-9202-95dd5c6d9456" (UID: "661e15c7-897e-4b47-9202-95dd5c6d9456"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.413522 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/661e15c7-897e-4b47-9202-95dd5c6d9456-logs\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.420429 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/661e15c7-897e-4b47-9202-95dd5c6d9456-kube-api-access-ptsdc" (OuterVolumeSpecName: "kube-api-access-ptsdc") pod "661e15c7-897e-4b47-9202-95dd5c6d9456" (UID: "661e15c7-897e-4b47-9202-95dd5c6d9456"). InnerVolumeSpecName "kube-api-access-ptsdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.441830 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-config-data" (OuterVolumeSpecName: "config-data") pod "661e15c7-897e-4b47-9202-95dd5c6d9456" (UID: "661e15c7-897e-4b47-9202-95dd5c6d9456"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.445332 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "661e15c7-897e-4b47-9202-95dd5c6d9456" (UID: "661e15c7-897e-4b47-9202-95dd5c6d9456"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.470062 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-nova-metadata-tls-certs" (OuterVolumeSpecName: "nova-metadata-tls-certs") pod "661e15c7-897e-4b47-9202-95dd5c6d9456" (UID: "661e15c7-897e-4b47-9202-95dd5c6d9456"). InnerVolumeSpecName "nova-metadata-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.514815 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ptsdc\" (UniqueName: \"kubernetes.io/projected/661e15c7-897e-4b47-9202-95dd5c6d9456-kube-api-access-ptsdc\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.515036 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.515148 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.515206 4949 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/661e15c7-897e-4b47-9202-95dd5c6d9456-nova-metadata-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.545580 4949 generic.go:334] "Generic (PLEG): container finished" podID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerID="3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16" exitCode=0 Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.545663 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.545711 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"661e15c7-897e-4b47-9202-95dd5c6d9456","Type":"ContainerDied","Data":"3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16"} Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.545747 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"661e15c7-897e-4b47-9202-95dd5c6d9456","Type":"ContainerDied","Data":"e0afb970676fd2da1dd166172f49e27216b4f3a2ff977afa123296dc282a0f53"} Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.545767 4949 scope.go:117] "RemoveContainer" containerID="3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.567624 4949 scope.go:117] "RemoveContainer" containerID="99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.581718 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.600375 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.609097 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:26 crc kubenswrapper[4949]: E1001 16:02:26.609449 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-log" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.609468 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-log" Oct 01 16:02:26 crc kubenswrapper[4949]: E1001 16:02:26.609479 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-metadata" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.609485 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-metadata" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.609665 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-log" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.609698 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" containerName="nova-metadata-metadata" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.610373 4949 scope.go:117] "RemoveContainer" containerID="3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.610622 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.623419 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:26 crc kubenswrapper[4949]: E1001 16:02:26.623448 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16\": container with ID starting with 3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16 not found: ID does not exist" containerID="3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.623493 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16"} err="failed to get container status 
\"3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16\": rpc error: code = NotFound desc = could not find container \"3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16\": container with ID starting with 3f6a090a9f41560edbdd22e26eec4ec27025534cc833a303d4a90b249961ef16 not found: ID does not exist" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.623520 4949 scope.go:117] "RemoveContainer" containerID="99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.628810 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-config-data" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.629248 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-nova-metadata-internal-svc" Oct 01 16:02:26 crc kubenswrapper[4949]: E1001 16:02:26.629235 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680\": container with ID starting with 99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680 not found: ID does not exist" containerID="99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.629324 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680"} err="failed to get container status \"99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680\": rpc error: code = NotFound desc = could not find container \"99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680\": container with ID starting with 99fa9363b7bd81edeb2ea79a2171a519602d969795bc13c4f6b536d2de062680 not found: ID does not exist" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.726548 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/562eed2a-1a27-4c6d-8c8c-675924006456-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.726815 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562eed2a-1a27-4c6d-8c8c-675924006456-config-data\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.726864 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-975cg\" (UniqueName: \"kubernetes.io/projected/562eed2a-1a27-4c6d-8c8c-675924006456-kube-api-access-975cg\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.726995 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/562eed2a-1a27-4c6d-8c8c-675924006456-logs\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.727064 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562eed2a-1a27-4c6d-8c8c-675924006456-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.829207 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"config-data\" (UniqueName: \"kubernetes.io/secret/562eed2a-1a27-4c6d-8c8c-675924006456-config-data\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.829244 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-975cg\" (UniqueName: \"kubernetes.io/projected/562eed2a-1a27-4c6d-8c8c-675924006456-kube-api-access-975cg\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.829298 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/562eed2a-1a27-4c6d-8c8c-675924006456-logs\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.829323 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562eed2a-1a27-4c6d-8c8c-675924006456-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.829375 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/562eed2a-1a27-4c6d-8c8c-675924006456-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.830138 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/562eed2a-1a27-4c6d-8c8c-675924006456-logs\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " 
pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.833726 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/562eed2a-1a27-4c6d-8c8c-675924006456-combined-ca-bundle\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.834357 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/562eed2a-1a27-4c6d-8c8c-675924006456-config-data\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.834474 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-tls-certs\" (UniqueName: \"kubernetes.io/secret/562eed2a-1a27-4c6d-8c8c-675924006456-nova-metadata-tls-certs\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.846264 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-975cg\" (UniqueName: \"kubernetes.io/projected/562eed2a-1a27-4c6d-8c8c-675924006456-kube-api-access-975cg\") pod \"nova-metadata-0\" (UID: \"562eed2a-1a27-4c6d-8c8c-675924006456\") " pod="openstack/nova-metadata-0" Oct 01 16:02:26 crc kubenswrapper[4949]: I1001 16:02:26.947898 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-metadata-0" Oct 01 16:02:27 crc kubenswrapper[4949]: I1001 16:02:27.381089 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-metadata-0"] Oct 01 16:02:27 crc kubenswrapper[4949]: W1001 16:02:27.382677 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod562eed2a_1a27_4c6d_8c8c_675924006456.slice/crio-a6130725d5f5e2a3f38c6e0f1fc7341a4a6f38d7f9d88853c30ccfe283e4c4be WatchSource:0}: Error finding container a6130725d5f5e2a3f38c6e0f1fc7341a4a6f38d7f9d88853c30ccfe283e4c4be: Status 404 returned error can't find the container with id a6130725d5f5e2a3f38c6e0f1fc7341a4a6f38d7f9d88853c30ccfe283e4c4be Oct 01 16:02:27 crc kubenswrapper[4949]: I1001 16:02:27.559171 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"562eed2a-1a27-4c6d-8c8c-675924006456","Type":"ContainerStarted","Data":"a6130725d5f5e2a3f38c6e0f1fc7341a4a6f38d7f9d88853c30ccfe283e4c4be"} Oct 01 16:02:27 crc kubenswrapper[4949]: I1001 16:02:27.613897 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="661e15c7-897e-4b47-9202-95dd5c6d9456" path="/var/lib/kubelet/pods/661e15c7-897e-4b47-9202-95dd5c6d9456/volumes" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.465823 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.574447 4949 generic.go:334] "Generic (PLEG): container finished" podID="b521e871-1589-4db3-a0dc-06eedffd3ada" containerID="515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a" exitCode=0 Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.574573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b521e871-1589-4db3-a0dc-06eedffd3ada","Type":"ContainerDied","Data":"515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a"} Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.574647 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"b521e871-1589-4db3-a0dc-06eedffd3ada","Type":"ContainerDied","Data":"2ae9e983a34fbe5cee149e1c4fed39d0a794a717222e37b45e99c7a43636e380"} Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.574704 4949 scope.go:117] "RemoveContainer" containerID="515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.575167 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.581109 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"562eed2a-1a27-4c6d-8c8c-675924006456","Type":"ContainerStarted","Data":"e7ab3502ebc7d1328be6b8d95f0383b4c0dbe8d46a71cd872dc29d836781c296"} Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.581216 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-metadata-0" event={"ID":"562eed2a-1a27-4c6d-8c8c-675924006456","Type":"ContainerStarted","Data":"c6520db4bfa97ad353fd4fa87e4ea7bc6b4235bd2534af7f8934a4ff67fe1faf"} Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.610511 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-metadata-0" podStartSLOduration=2.610481783 podStartE2EDuration="2.610481783s" podCreationTimestamp="2025-10-01 16:02:26 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:02:28.607647594 +0000 UTC m=+1247.913253785" watchObservedRunningTime="2025-10-01 16:02:28.610481783 +0000 UTC m=+1247.916087974" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.611903 4949 scope.go:117] "RemoveContainer" containerID="515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a" Oct 01 16:02:28 crc kubenswrapper[4949]: E1001 16:02:28.612323 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a\": container with ID starting with 515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a not found: ID does not exist" containerID="515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.612367 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a"} err="failed to get container status \"515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a\": rpc error: code = NotFound desc = could not find container \"515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a\": container with ID starting with 515fc1f74ea21db077344a168f73569962ec4d7d4492379de4dc11d6b5330f1a not found: ID does not exist" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.662809 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-config-data\") pod \"b521e871-1589-4db3-a0dc-06eedffd3ada\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.662897 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pf9sn\" (UniqueName: \"kubernetes.io/projected/b521e871-1589-4db3-a0dc-06eedffd3ada-kube-api-access-pf9sn\") pod \"b521e871-1589-4db3-a0dc-06eedffd3ada\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.662964 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-combined-ca-bundle\") pod \"b521e871-1589-4db3-a0dc-06eedffd3ada\" (UID: \"b521e871-1589-4db3-a0dc-06eedffd3ada\") " Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.672807 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b521e871-1589-4db3-a0dc-06eedffd3ada-kube-api-access-pf9sn" (OuterVolumeSpecName: "kube-api-access-pf9sn") pod "b521e871-1589-4db3-a0dc-06eedffd3ada" (UID: "b521e871-1589-4db3-a0dc-06eedffd3ada"). InnerVolumeSpecName "kube-api-access-pf9sn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.688707 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b521e871-1589-4db3-a0dc-06eedffd3ada" (UID: "b521e871-1589-4db3-a0dc-06eedffd3ada"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.692821 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-config-data" (OuterVolumeSpecName: "config-data") pod "b521e871-1589-4db3-a0dc-06eedffd3ada" (UID: "b521e871-1589-4db3-a0dc-06eedffd3ada"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.765052 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.765088 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pf9sn\" (UniqueName: \"kubernetes.io/projected/b521e871-1589-4db3-a0dc-06eedffd3ada-kube-api-access-pf9sn\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.765101 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b521e871-1589-4db3-a0dc-06eedffd3ada-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.909187 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.920185 4949 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.928939 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:02:28 crc kubenswrapper[4949]: E1001 16:02:28.929917 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b521e871-1589-4db3-a0dc-06eedffd3ada" containerName="nova-scheduler-scheduler" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.929946 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b521e871-1589-4db3-a0dc-06eedffd3ada" containerName="nova-scheduler-scheduler" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.930183 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b521e871-1589-4db3-a0dc-06eedffd3ada" containerName="nova-scheduler-scheduler" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.930843 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.932334 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-scheduler-config-data" Oct 01 16:02:28 crc kubenswrapper[4949]: I1001 16:02:28.940237 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.070324 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53891933-1769-4a81-b239-f5b4a02cbe81-config-data\") pod \"nova-scheduler-0\" (UID: \"53891933-1769-4a81-b239-f5b4a02cbe81\") " pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.070377 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53891933-1769-4a81-b239-f5b4a02cbe81-combined-ca-bundle\") pod 
\"nova-scheduler-0\" (UID: \"53891933-1769-4a81-b239-f5b4a02cbe81\") " pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.070406 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tz2s\" (UniqueName: \"kubernetes.io/projected/53891933-1769-4a81-b239-f5b4a02cbe81-kube-api-access-2tz2s\") pod \"nova-scheduler-0\" (UID: \"53891933-1769-4a81-b239-f5b4a02cbe81\") " pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.172050 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53891933-1769-4a81-b239-f5b4a02cbe81-config-data\") pod \"nova-scheduler-0\" (UID: \"53891933-1769-4a81-b239-f5b4a02cbe81\") " pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.172119 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53891933-1769-4a81-b239-f5b4a02cbe81-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53891933-1769-4a81-b239-f5b4a02cbe81\") " pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.172210 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tz2s\" (UniqueName: \"kubernetes.io/projected/53891933-1769-4a81-b239-f5b4a02cbe81-kube-api-access-2tz2s\") pod \"nova-scheduler-0\" (UID: \"53891933-1769-4a81-b239-f5b4a02cbe81\") " pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.175987 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/53891933-1769-4a81-b239-f5b4a02cbe81-combined-ca-bundle\") pod \"nova-scheduler-0\" (UID: \"53891933-1769-4a81-b239-f5b4a02cbe81\") " pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc 
kubenswrapper[4949]: I1001 16:02:29.176546 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53891933-1769-4a81-b239-f5b4a02cbe81-config-data\") pod \"nova-scheduler-0\" (UID: \"53891933-1769-4a81-b239-f5b4a02cbe81\") " pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.187588 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tz2s\" (UniqueName: \"kubernetes.io/projected/53891933-1769-4a81-b239-f5b4a02cbe81-kube-api-access-2tz2s\") pod \"nova-scheduler-0\" (UID: \"53891933-1769-4a81-b239-f5b4a02cbe81\") " pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.288910 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-scheduler-0" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.610389 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b521e871-1589-4db3-a0dc-06eedffd3ada" path="/var/lib/kubelet/pods/b521e871-1589-4db3-a0dc-06eedffd3ada/volumes" Oct 01 16:02:29 crc kubenswrapper[4949]: I1001 16:02:29.711377 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-scheduler-0"] Oct 01 16:02:29 crc kubenswrapper[4949]: W1001 16:02:29.714043 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53891933_1769_4a81_b239_f5b4a02cbe81.slice/crio-d591376280ca9bd851754cce14fcd8b030cb8feb79d818f4f5b36e9eb588e549 WatchSource:0}: Error finding container d591376280ca9bd851754cce14fcd8b030cb8feb79d818f4f5b36e9eb588e549: Status 404 returned error can't find the container with id d591376280ca9bd851754cce14fcd8b030cb8feb79d818f4f5b36e9eb588e549 Oct 01 16:02:30 crc kubenswrapper[4949]: I1001 16:02:30.596087 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" 
event={"ID":"53891933-1769-4a81-b239-f5b4a02cbe81","Type":"ContainerStarted","Data":"bf5585b96dd014c4e07362cb10b3ebd8147f65d33b3119988daf8b8263c24ca8"} Oct 01 16:02:30 crc kubenswrapper[4949]: I1001 16:02:30.596415 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-scheduler-0" event={"ID":"53891933-1769-4a81-b239-f5b4a02cbe81","Type":"ContainerStarted","Data":"d591376280ca9bd851754cce14fcd8b030cb8feb79d818f4f5b36e9eb588e549"} Oct 01 16:02:30 crc kubenswrapper[4949]: I1001 16:02:30.619262 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-scheduler-0" podStartSLOduration=2.619237652 podStartE2EDuration="2.619237652s" podCreationTimestamp="2025-10-01 16:02:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:02:30.616418234 +0000 UTC m=+1249.922024425" watchObservedRunningTime="2025-10-01 16:02:30.619237652 +0000 UTC m=+1249.924843853" Oct 01 16:02:31 crc kubenswrapper[4949]: I1001 16:02:31.948822 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 01 16:02:31 crc kubenswrapper[4949]: I1001 16:02:31.949187 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-metadata-0" Oct 01 16:02:33 crc kubenswrapper[4949]: I1001 16:02:33.913457 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 01 16:02:33 crc kubenswrapper[4949]: I1001 16:02:33.913803 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-api-0" Oct 01 16:02:34 crc kubenswrapper[4949]: I1001 16:02:34.290052 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-scheduler-0" Oct 01 16:02:34 crc kubenswrapper[4949]: I1001 16:02:34.929595 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" 
podUID="9fb393d4-7b75-432e-aaec-767addd7eb30" containerName="nova-api-log" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 01 16:02:34 crc kubenswrapper[4949]: I1001 16:02:34.929688 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-api-0" podUID="9fb393d4-7b75-432e-aaec-767addd7eb30" containerName="nova-api-api" probeResult="failure" output="Get \"https://10.217.0.187:8774/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 01 16:02:36 crc kubenswrapper[4949]: I1001 16:02:36.949720 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 01 16:02:36 crc kubenswrapper[4949]: I1001 16:02:36.950064 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-metadata-0" Oct 01 16:02:37 crc kubenswrapper[4949]: I1001 16:02:37.962408 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="562eed2a-1a27-4c6d-8c8c-675924006456" containerName="nova-metadata-log" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 01 16:02:37 crc kubenswrapper[4949]: I1001 16:02:37.962417 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/nova-metadata-0" podUID="562eed2a-1a27-4c6d-8c8c-675924006456" containerName="nova-metadata-metadata" probeResult="failure" output="Get \"https://10.217.0.188:8775/\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Oct 01 16:02:39 crc kubenswrapper[4949]: I1001 16:02:39.289779 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/nova-scheduler-0" Oct 01 16:02:39 crc kubenswrapper[4949]: I1001 16:02:39.324517 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openstack/nova-scheduler-0" Oct 01 16:02:39 crc kubenswrapper[4949]: I1001 16:02:39.724689 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-scheduler-0" Oct 01 16:02:41 crc kubenswrapper[4949]: I1001 16:02:41.781728 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 01 16:02:43 crc kubenswrapper[4949]: I1001 16:02:43.918919 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 01 16:02:43 crc kubenswrapper[4949]: I1001 16:02:43.920198 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 01 16:02:43 crc kubenswrapper[4949]: I1001 16:02:43.925517 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-api-0" Oct 01 16:02:43 crc kubenswrapper[4949]: I1001 16:02:43.925562 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 01 16:02:44 crc kubenswrapper[4949]: I1001 16:02:44.738155 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/nova-api-0" Oct 01 16:02:44 crc kubenswrapper[4949]: I1001 16:02:44.745741 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-api-0" Oct 01 16:02:46 crc kubenswrapper[4949]: I1001 16:02:46.955616 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 01 16:02:46 crc kubenswrapper[4949]: I1001 16:02:46.958319 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/nova-metadata-0" Oct 01 16:02:46 crc kubenswrapper[4949]: I1001 16:02:46.962764 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/nova-metadata-0" Oct 01 16:02:47 crc kubenswrapper[4949]: I1001 16:02:47.770955 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="openstack/nova-metadata-0" Oct 01 16:02:48 crc kubenswrapper[4949]: I1001 16:02:48.038957 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:02:48 crc kubenswrapper[4949]: I1001 16:02:48.039045 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:02:55 crc kubenswrapper[4949]: I1001 16:02:55.771916 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 01 16:02:56 crc kubenswrapper[4949]: I1001 16:02:56.881209 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 01 16:03:00 crc kubenswrapper[4949]: I1001 16:03:00.036889 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-server-0" podUID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" containerName="rabbitmq" containerID="cri-o://988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4" gracePeriod=604796 Oct 01 16:03:00 crc kubenswrapper[4949]: I1001 16:03:00.717508 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/rabbitmq-cell1-server-0" podUID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" containerName="rabbitmq" containerID="cri-o://2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde" gracePeriod=604797 Oct 01 16:03:00 crc kubenswrapper[4949]: I1001 16:03:00.992742 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-server-0" podUID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" 
containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.101:5671: connect: connection refused" Oct 01 16:03:01 crc kubenswrapper[4949]: I1001 16:03:01.276453 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/rabbitmq-cell1-server-0" podUID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" containerName="rabbitmq" probeResult="failure" output="dial tcp 10.217.0.102:5671: connect: connection refused" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.572193 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.745899 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-config-data\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.746596 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-server-conf\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.746828 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-pod-info\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.746854 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-plugins-conf\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " 
Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.746972 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z4w8w\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-kube-api-access-z4w8w\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747032 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-plugins\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747060 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-tls\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747109 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-erlang-cookie\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747165 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-erlang-cookie-secret\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747191 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-confd\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747242 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\" (UID: \"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2\") " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747405 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747650 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "rabbitmq-erlang-cookie". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747739 4949 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747752 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.747946 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.752835 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-pod-info" (OuterVolumeSpecName: "pod-info") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "pod-info". PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.753094 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "persistence") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.753268 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.753771 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.756253 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-kube-api-access-z4w8w" (OuterVolumeSpecName: "kube-api-access-z4w8w") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "kube-api-access-z4w8w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.773313 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-config-data" (OuterVolumeSpecName: "config-data") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.804886 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-server-conf" (OuterVolumeSpecName: "server-conf") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.848589 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z4w8w\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-kube-api-access-z4w8w\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.848634 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.848651 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.848664 4949 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.848699 4949 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.848711 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.848721 4949 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-server-conf\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.848733 4949 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-pod-info\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.856655 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" (UID: "01cf3bff-bdb8-43f9-bf81-c106d5c5dae2"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.870621 4949 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.927444 4949 generic.go:334] "Generic (PLEG): container finished" podID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" containerID="988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4" exitCode=0 Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.927494 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2","Type":"ContainerDied","Data":"988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4"} Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.927526 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"01cf3bff-bdb8-43f9-bf81-c106d5c5dae2","Type":"ContainerDied","Data":"6ca829030abdba0e6e2294139faa3220f1c3ec37d7dab9e2c8d904b469b527e7"} Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.927548 4949 scope.go:117] "RemoveContainer" containerID="988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.927690 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.950212 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.950254 4949 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.967084 4949 scope.go:117] "RemoveContainer" containerID="9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77" Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.979726 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 01 16:03:06 crc kubenswrapper[4949]: I1001 16:03:06.992642 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.006677 4949 scope.go:117] "RemoveContainer" containerID="988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4" Oct 01 16:03:07 crc kubenswrapper[4949]: E1001 16:03:07.007330 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4\": container with ID starting with 988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4 not found: ID does not exist" containerID="988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.007359 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4"} err="failed to get container status 
\"988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4\": rpc error: code = NotFound desc = could not find container \"988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4\": container with ID starting with 988a4aea139d70fee8c74440ebe2c28bff716f0cf678b079a97952a5beef6bf4 not found: ID does not exist" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.007379 4949 scope.go:117] "RemoveContainer" containerID="9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77" Oct 01 16:03:07 crc kubenswrapper[4949]: E1001 16:03:07.007609 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77\": container with ID starting with 9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77 not found: ID does not exist" containerID="9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.007626 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77"} err="failed to get container status \"9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77\": rpc error: code = NotFound desc = could not find container \"9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77\": container with ID starting with 9d232863bad63a6d2e94fa2dacde909b924739702a22f1742d3468ae1c494e77 not found: ID does not exist" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.012946 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-server-0"] Oct 01 16:03:07 crc kubenswrapper[4949]: E1001 16:03:07.013386 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" containerName="setup-container" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.013403 4949 
state_mem.go:107] "Deleted CPUSet assignment" podUID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" containerName="setup-container" Oct 01 16:03:07 crc kubenswrapper[4949]: E1001 16:03:07.013422 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" containerName="rabbitmq" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.013431 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" containerName="rabbitmq" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.013626 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" containerName="rabbitmq" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.014715 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.017713 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-plugins-conf" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.017903 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-default-user" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.018167 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-svc" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.018343 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-erlang-cookie" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.018465 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-config-data" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.018572 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-server-conf" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.018815 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"rabbitmq-server-dockercfg-77hcf" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.038756 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.153820 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.153923 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e904978-9466-4e56-8e31-c4e06b6f49e2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.153951 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e904978-9466-4e56-8e31-c4e06b6f49e2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.154005 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e904978-9466-4e56-8e31-c4e06b6f49e2-config-data\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.154050 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/3e904978-9466-4e56-8e31-c4e06b6f49e2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.154113 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dqq8b\" (UniqueName: \"kubernetes.io/projected/3e904978-9466-4e56-8e31-c4e06b6f49e2-kube-api-access-dqq8b\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.154188 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e904978-9466-4e56-8e31-c4e06b6f49e2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.154230 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.154302 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.154698 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: 
\"kubernetes.io/projected/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.154875 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257049 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257482 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257528 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257559 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e904978-9466-4e56-8e31-c4e06b6f49e2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 
16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257579 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e904978-9466-4e56-8e31-c4e06b6f49e2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257610 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e904978-9466-4e56-8e31-c4e06b6f49e2-config-data\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257651 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e904978-9466-4e56-8e31-c4e06b6f49e2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257705 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dqq8b\" (UniqueName: \"kubernetes.io/projected/3e904978-9466-4e56-8e31-c4e06b6f49e2-kube-api-access-dqq8b\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257729 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") device mount path \"/mnt/openstack/pv08\"" pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.257737 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e904978-9466-4e56-8e31-c4e06b6f49e2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.258070 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.258103 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.258733 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/3e904978-9466-4e56-8e31-c4e06b6f49e2-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.259710 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3e904978-9466-4e56-8e31-c4e06b6f49e2-config-data\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.261425 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: 
\"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.266046 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/3e904978-9466-4e56-8e31-c4e06b6f49e2-server-conf\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.266284 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.266994 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.268244 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/3e904978-9466-4e56-8e31-c4e06b6f49e2-rabbitmq-tls\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.270180 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/3e904978-9466-4e56-8e31-c4e06b6f49e2-pod-info\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.270734 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/3e904978-9466-4e56-8e31-c4e06b6f49e2-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.281622 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dqq8b\" (UniqueName: \"kubernetes.io/projected/3e904978-9466-4e56-8e31-c4e06b6f49e2-kube-api-access-dqq8b\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.290071 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"rabbitmq-server-0\" (UID: \"3e904978-9466-4e56-8e31-c4e06b6f49e2\") " pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.347215 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/rabbitmq-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.354761 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.460793 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-erlang-cookie\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.460952 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-plugins\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.460985 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-pod-info\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.461029 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-tls\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.461061 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.461088 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: 
\"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-plugins-conf\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.461139 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-confd\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.461171 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-config-data\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.461223 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-server-conf\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.461287 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-erlang-cookie-secret\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.461321 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ls2ml\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-kube-api-access-ls2ml\") pod \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\" (UID: \"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a\") " Oct 01 16:03:07 crc kubenswrapper[4949]: 
I1001 16:03:07.463074 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.462897 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.463388 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.465259 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "persistence") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.467319 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "erlang-cookie-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.468059 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-kube-api-access-ls2ml" (OuterVolumeSpecName: "kube-api-access-ls2ml") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "kube-api-access-ls2ml". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.468236 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-tls" (OuterVolumeSpecName: "rabbitmq-tls") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "rabbitmq-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.468819 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-pod-info" (OuterVolumeSpecName: "pod-info") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.551996 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-config-data" (OuterVolumeSpecName: "config-data") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.567899 4949 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.567941 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ls2ml\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-kube-api-access-ls2ml\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.567956 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-erlang-cookie\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.567969 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.567981 4949 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-pod-info\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.567993 4949 reconciler_common.go:293] "Volume detached for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-tls\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.568017 4949 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.568028 4949 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-plugins-conf\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.568039 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.569920 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-server-conf" (OuterVolumeSpecName: "server-conf") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "server-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.595367 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" (UID: "b9ca2257-0b8d-4f57-8772-8b6d5d28b10a"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.596899 4949 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.614823 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01cf3bff-bdb8-43f9-bf81-c106d5c5dae2" path="/var/lib/kubelet/pods/01cf3bff-bdb8-43f9-bf81-c106d5c5dae2/volumes" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.669996 4949 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.670025 4949 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.670044 4949 reconciler_common.go:293] "Volume detached for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a-server-conf\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.864588 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-server-0"] Oct 01 16:03:07 crc kubenswrapper[4949]: W1001 16:03:07.872449 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3e904978_9466_4e56_8e31_c4e06b6f49e2.slice/crio-d94ee3e85d6c4c3be1667e00e0b8a19c9ab81d1829a2ec76246fd68367acbb92 WatchSource:0}: Error finding container d94ee3e85d6c4c3be1667e00e0b8a19c9ab81d1829a2ec76246fd68367acbb92: Status 404 returned error can't find the container with id 
d94ee3e85d6c4c3be1667e00e0b8a19c9ab81d1829a2ec76246fd68367acbb92 Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.958477 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3e904978-9466-4e56-8e31-c4e06b6f49e2","Type":"ContainerStarted","Data":"d94ee3e85d6c4c3be1667e00e0b8a19c9ab81d1829a2ec76246fd68367acbb92"} Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.963821 4949 generic.go:334] "Generic (PLEG): container finished" podID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" containerID="2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde" exitCode=0 Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.963859 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a","Type":"ContainerDied","Data":"2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde"} Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.963882 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"b9ca2257-0b8d-4f57-8772-8b6d5d28b10a","Type":"ContainerDied","Data":"a542a40fed197c0dd4a77c617b207e79bbc00650bb3e87148e9e2f8a3d0a8bed"} Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.963897 4949 scope.go:117] "RemoveContainer" containerID="2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde" Oct 01 16:03:07 crc kubenswrapper[4949]: I1001 16:03:07.963984 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.024207 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.061272 4949 scope.go:117] "RemoveContainer" containerID="37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.075387 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.116021 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 01 16:03:08 crc kubenswrapper[4949]: E1001 16:03:08.116458 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" containerName="rabbitmq" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.116476 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" containerName="rabbitmq" Oct 01 16:03:08 crc kubenswrapper[4949]: E1001 16:03:08.116506 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" containerName="setup-container" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.116513 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" containerName="setup-container" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.116685 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" containerName="rabbitmq" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.117950 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.121755 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-server-dockercfg-bvvb7" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.121918 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-default-user" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.122022 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-config-data" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.122154 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-plugins-conf" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.122359 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"rabbitmq-cell1-erlang-cookie" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.122540 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-rabbitmq-cell1-svc" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.122737 4949 scope.go:117] "RemoveContainer" containerID="2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.122860 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"rabbitmq-cell1-server-conf" Oct 01 16:03:08 crc kubenswrapper[4949]: E1001 16:03:08.123329 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde\": container with ID starting with 2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde not found: ID does not exist" containerID="2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.123351 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde"} err="failed to get container status \"2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde\": rpc error: code = NotFound desc = could not find container \"2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde\": container with ID starting with 2e1045bad23fe7d809b70ff5cc0749488cda57fe86e9807366144059f0c42bde not found: ID does not exist" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.123371 4949 scope.go:117] "RemoveContainer" containerID="37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc" Oct 01 16:03:08 crc kubenswrapper[4949]: E1001 16:03:08.124540 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc\": container with ID starting with 37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc not found: ID does not exist" containerID="37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.124562 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc"} err="failed to get container status \"37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc\": rpc error: code = NotFound desc = could not find container \"37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc\": container with ID starting with 37122c84d3d23987e156207e87880b9c2d0f918cad6199738cd14ab31f8b51bc not found: ID does not exist" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.129234 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.184818 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/620a0468-6462-442e-bfcf-ca26669a638a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.184862 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.184904 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/620a0468-6462-442e-bfcf-ca26669a638a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.184940 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.185150 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/620a0468-6462-442e-bfcf-ca26669a638a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.185224 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/620a0468-6462-442e-bfcf-ca26669a638a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.185464 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/620a0468-6462-442e-bfcf-ca26669a638a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.185500 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.185533 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.185607 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tsfwx\" (UniqueName: \"kubernetes.io/projected/620a0468-6462-442e-bfcf-ca26669a638a-kube-api-access-tsfwx\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.185661 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287438 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287503 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/620a0468-6462-442e-bfcf-ca26669a638a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287535 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/620a0468-6462-442e-bfcf-ca26669a638a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287629 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/620a0468-6462-442e-bfcf-ca26669a638a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287656 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287676 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287728 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tsfwx\" (UniqueName: \"kubernetes.io/projected/620a0468-6462-442e-bfcf-ca26669a638a-kube-api-access-tsfwx\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287781 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287818 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/620a0468-6462-442e-bfcf-ca26669a638a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287850 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.287898 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/620a0468-6462-442e-bfcf-ca26669a638a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.288141 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-plugins\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.288548 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/620a0468-6462-442e-bfcf-ca26669a638a-config-data\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.288591 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/620a0468-6462-442e-bfcf-ca26669a638a-plugins-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.288959 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-erlang-cookie\") pod \"rabbitmq-cell1-server-0\" (UID: 
\"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.289215 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") device mount path \"/mnt/openstack/pv04\"" pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.289880 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"server-conf\" (UniqueName: \"kubernetes.io/configmap/620a0468-6462-442e-bfcf-ca26669a638a-server-conf\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.292504 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/620a0468-6462-442e-bfcf-ca26669a638a-pod-info\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.292658 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/620a0468-6462-442e-bfcf-ca26669a638a-erlang-cookie-secret\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.292761 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-tls\" (UniqueName: \"kubernetes.io/projected/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-tls\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 
16:03:08.294164 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/620a0468-6462-442e-bfcf-ca26669a638a-rabbitmq-confd\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.309111 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tsfwx\" (UniqueName: \"kubernetes.io/projected/620a0468-6462-442e-bfcf-ca26669a638a-kube-api-access-tsfwx\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.319391 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"rabbitmq-cell1-server-0\" (UID: \"620a0468-6462-442e-bfcf-ca26669a638a\") " pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.443268 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.938847 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/rabbitmq-cell1-server-0"] Oct 01 16:03:08 crc kubenswrapper[4949]: W1001 16:03:08.946439 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod620a0468_6462_442e_bfcf_ca26669a638a.slice/crio-92c3c965fe3001bbfee91baf111301ef23b8f7ce21feaa4db9eab779cf4b5cef WatchSource:0}: Error finding container 92c3c965fe3001bbfee91baf111301ef23b8f7ce21feaa4db9eab779cf4b5cef: Status 404 returned error can't find the container with id 92c3c965fe3001bbfee91baf111301ef23b8f7ce21feaa4db9eab779cf4b5cef Oct 01 16:03:08 crc kubenswrapper[4949]: I1001 16:03:08.974549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"620a0468-6462-442e-bfcf-ca26669a638a","Type":"ContainerStarted","Data":"92c3c965fe3001bbfee91baf111301ef23b8f7ce21feaa4db9eab779cf4b5cef"} Oct 01 16:03:09 crc kubenswrapper[4949]: I1001 16:03:09.613323 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b9ca2257-0b8d-4f57-8772-8b6d5d28b10a" path="/var/lib/kubelet/pods/b9ca2257-0b8d-4f57-8772-8b6d5d28b10a/volumes" Oct 01 16:03:09 crc kubenswrapper[4949]: I1001 16:03:09.984540 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3e904978-9466-4e56-8e31-c4e06b6f49e2","Type":"ContainerStarted","Data":"fc2e2bc7796d64353721c2cdcf31430e108a7b50637f3de9cda9f4c81519bf0c"} Oct 01 16:03:10 crc kubenswrapper[4949]: I1001 16:03:10.994494 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"620a0468-6462-442e-bfcf-ca26669a638a","Type":"ContainerStarted","Data":"55e48e8ab951f0ae21851d388536ccf855ad827e1e4fe1dd5d2b1ddf9b0bda7a"} Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.096864 
4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-zshnf"] Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.098439 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.101092 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-edpm-ipam" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.145707 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-zshnf"] Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.245201 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.245262 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.245312 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.245349 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.245392 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-config\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.245462 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cds48\" (UniqueName: \"kubernetes.io/projected/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-kube-api-access-cds48\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.347612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cds48\" (UniqueName: \"kubernetes.io/projected/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-kube-api-access-cds48\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.347854 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.347946 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" 
(UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.348100 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.348237 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.348357 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-config\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.349085 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-nb\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.349148 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: 
\"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-dns-svc\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.349151 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-openstack-edpm-ipam\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.349174 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-sb\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.349310 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-config\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.371859 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cds48\" (UniqueName: \"kubernetes.io/projected/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-kube-api-access-cds48\") pod \"dnsmasq-dns-6447ccbd8f-zshnf\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.448055 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:11 crc kubenswrapper[4949]: I1001 16:03:11.911736 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-zshnf"] Oct 01 16:03:11 crc kubenswrapper[4949]: W1001 16:03:11.916584 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d650e79_1c23_4c79_8cd7_6ee731bc81e6.slice/crio-c543c54467d602913e1bacbdcec051dd0e6df254a9ed21da9877b4f25f399728 WatchSource:0}: Error finding container c543c54467d602913e1bacbdcec051dd0e6df254a9ed21da9877b4f25f399728: Status 404 returned error can't find the container with id c543c54467d602913e1bacbdcec051dd0e6df254a9ed21da9877b4f25f399728 Oct 01 16:03:12 crc kubenswrapper[4949]: I1001 16:03:12.006562 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" event={"ID":"9d650e79-1c23-4c79-8cd7-6ee731bc81e6","Type":"ContainerStarted","Data":"c543c54467d602913e1bacbdcec051dd0e6df254a9ed21da9877b4f25f399728"} Oct 01 16:03:13 crc kubenswrapper[4949]: I1001 16:03:13.023314 4949 generic.go:334] "Generic (PLEG): container finished" podID="9d650e79-1c23-4c79-8cd7-6ee731bc81e6" containerID="18ecf4670f51bcb7d2586a1f2368a241e6d0220f5135b01f1122191a274cdbe3" exitCode=0 Oct 01 16:03:13 crc kubenswrapper[4949]: I1001 16:03:13.023377 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" event={"ID":"9d650e79-1c23-4c79-8cd7-6ee731bc81e6","Type":"ContainerDied","Data":"18ecf4670f51bcb7d2586a1f2368a241e6d0220f5135b01f1122191a274cdbe3"} Oct 01 16:03:14 crc kubenswrapper[4949]: I1001 16:03:14.035361 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" event={"ID":"9d650e79-1c23-4c79-8cd7-6ee731bc81e6","Type":"ContainerStarted","Data":"f7bf67ae781a319d9559b81750773177b4ec84e7daeea0d505f28b6517948252"} Oct 01 16:03:14 crc 
kubenswrapper[4949]: I1001 16:03:14.035598 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:14 crc kubenswrapper[4949]: I1001 16:03:14.065204 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" podStartSLOduration=3.065185938 podStartE2EDuration="3.065185938s" podCreationTimestamp="2025-10-01 16:03:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:03:14.05556572 +0000 UTC m=+1293.361171911" watchObservedRunningTime="2025-10-01 16:03:14.065185938 +0000 UTC m=+1293.370792129" Oct 01 16:03:18 crc kubenswrapper[4949]: I1001 16:03:18.038528 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:03:18 crc kubenswrapper[4949]: I1001 16:03:18.039245 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.450498 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.526347 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-p6rns"] Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.526744 4949 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/dnsmasq-dns-5b856c5697-p6rns" podUID="c21b4c0b-7e29-4226-8745-3f942703d8f0" containerName="dnsmasq-dns" containerID="cri-o://19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9" gracePeriod=10 Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.677467 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-5fmz9"] Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.679349 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.696431 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-5fmz9"] Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.844405 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9bkbn\" (UniqueName: \"kubernetes.io/projected/62f9a851-7558-4b9f-86fe-a5412eaf318e-kube-api-access-9bkbn\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.844747 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.844774 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc 
kubenswrapper[4949]: I1001 16:03:21.844794 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.844912 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-config\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.844931 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.946886 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-config\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.946931 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.946959 
4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9bkbn\" (UniqueName: \"kubernetes.io/projected/62f9a851-7558-4b9f-86fe-a5412eaf318e-kube-api-access-9bkbn\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.947026 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.947054 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.947077 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.948440 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-dns-svc\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.950273 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-nb\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.950587 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-openstack-edpm-ipam\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.950971 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-sb\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.951164 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-config\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:21 crc kubenswrapper[4949]: I1001 16:03:21.974573 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9bkbn\" (UniqueName: \"kubernetes.io/projected/62f9a851-7558-4b9f-86fe-a5412eaf318e-kube-api-access-9bkbn\") pod \"dnsmasq-dns-864d5fc68c-5fmz9\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.048716 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.081331 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.139327 4949 generic.go:334] "Generic (PLEG): container finished" podID="c21b4c0b-7e29-4226-8745-3f942703d8f0" containerID="19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9" exitCode=0 Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.139636 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" event={"ID":"c21b4c0b-7e29-4226-8745-3f942703d8f0","Type":"ContainerDied","Data":"19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9"} Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.139687 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.139792 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-5b856c5697-p6rns" event={"ID":"c21b4c0b-7e29-4226-8745-3f942703d8f0","Type":"ContainerDied","Data":"9e8465bac9a873cf336b107d28e3ab039f63f9f2ff1c96ba5582fd3946e55575"} Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.139827 4949 scope.go:117] "RemoveContainer" containerID="19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.190177 4949 scope.go:117] "RemoveContainer" containerID="390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.212952 4949 scope.go:117] "RemoveContainer" containerID="19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9" Oct 01 16:03:22 crc kubenswrapper[4949]: E1001 16:03:22.213747 4949 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9\": container with ID starting with 19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9 not found: ID does not exist" containerID="19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.213779 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9"} err="failed to get container status \"19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9\": rpc error: code = NotFound desc = could not find container \"19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9\": container with ID starting with 19b37063e89002ccef186f1a486b161a26ad16f1bea3c5083d17f2c744a5c1a9 not found: ID does not exist" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.213800 4949 scope.go:117] "RemoveContainer" containerID="390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2" Oct 01 16:03:22 crc kubenswrapper[4949]: E1001 16:03:22.214173 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2\": container with ID starting with 390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2 not found: ID does not exist" containerID="390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.214200 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2"} err="failed to get container status \"390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2\": rpc error: code = NotFound desc = could not find container 
\"390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2\": container with ID starting with 390b6f1d47afe51b11f7ac9dd6e655003f50307478a665229c01309cd44248e2 not found: ID does not exist" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.253602 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-sb\") pod \"c21b4c0b-7e29-4226-8745-3f942703d8f0\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.253708 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2qqdl\" (UniqueName: \"kubernetes.io/projected/c21b4c0b-7e29-4226-8745-3f942703d8f0-kube-api-access-2qqdl\") pod \"c21b4c0b-7e29-4226-8745-3f942703d8f0\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.253758 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-config\") pod \"c21b4c0b-7e29-4226-8745-3f942703d8f0\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.253785 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-nb\") pod \"c21b4c0b-7e29-4226-8745-3f942703d8f0\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.253815 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-dns-svc\") pod \"c21b4c0b-7e29-4226-8745-3f942703d8f0\" (UID: \"c21b4c0b-7e29-4226-8745-3f942703d8f0\") " Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 
16:03:22.261927 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c21b4c0b-7e29-4226-8745-3f942703d8f0-kube-api-access-2qqdl" (OuterVolumeSpecName: "kube-api-access-2qqdl") pod "c21b4c0b-7e29-4226-8745-3f942703d8f0" (UID: "c21b4c0b-7e29-4226-8745-3f942703d8f0"). InnerVolumeSpecName "kube-api-access-2qqdl". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.305555 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-config" (OuterVolumeSpecName: "config") pod "c21b4c0b-7e29-4226-8745-3f942703d8f0" (UID: "c21b4c0b-7e29-4226-8745-3f942703d8f0"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.326877 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "c21b4c0b-7e29-4226-8745-3f942703d8f0" (UID: "c21b4c0b-7e29-4226-8745-3f942703d8f0"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.328037 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "c21b4c0b-7e29-4226-8745-3f942703d8f0" (UID: "c21b4c0b-7e29-4226-8745-3f942703d8f0"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.329949 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "c21b4c0b-7e29-4226-8745-3f942703d8f0" (UID: "c21b4c0b-7e29-4226-8745-3f942703d8f0"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.355920 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2qqdl\" (UniqueName: \"kubernetes.io/projected/c21b4c0b-7e29-4226-8745-3f942703d8f0-kube-api-access-2qqdl\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.355968 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-config\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.355982 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.355994 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.356004 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/c21b4c0b-7e29-4226-8745-3f942703d8f0-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.481924 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-p6rns"] Oct 01 
16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.489889 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-5b856c5697-p6rns"] Oct 01 16:03:22 crc kubenswrapper[4949]: I1001 16:03:22.557099 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-5fmz9"] Oct 01 16:03:23 crc kubenswrapper[4949]: I1001 16:03:23.152823 4949 generic.go:334] "Generic (PLEG): container finished" podID="62f9a851-7558-4b9f-86fe-a5412eaf318e" containerID="905c85410f10fc9e6e2dec532d30528a16c6dd291d540c84bddabeb77fb16c3f" exitCode=0 Oct 01 16:03:23 crc kubenswrapper[4949]: I1001 16:03:23.152960 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" event={"ID":"62f9a851-7558-4b9f-86fe-a5412eaf318e","Type":"ContainerDied","Data":"905c85410f10fc9e6e2dec532d30528a16c6dd291d540c84bddabeb77fb16c3f"} Oct 01 16:03:23 crc kubenswrapper[4949]: I1001 16:03:23.153331 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" event={"ID":"62f9a851-7558-4b9f-86fe-a5412eaf318e","Type":"ContainerStarted","Data":"3163a78cb97b0a9c01fa2f0484b1539d2a018e3a05037644fc9d0df576f1f298"} Oct 01 16:03:23 crc kubenswrapper[4949]: I1001 16:03:23.613358 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c21b4c0b-7e29-4226-8745-3f942703d8f0" path="/var/lib/kubelet/pods/c21b4c0b-7e29-4226-8745-3f942703d8f0/volumes" Oct 01 16:03:24 crc kubenswrapper[4949]: I1001 16:03:24.189610 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" event={"ID":"62f9a851-7558-4b9f-86fe-a5412eaf318e","Type":"ContainerStarted","Data":"f9c3396ebafdf1fcf35a308b53799fb788160492967e33ff8b2f4ac1b30813bd"} Oct 01 16:03:24 crc kubenswrapper[4949]: I1001 16:03:24.189910 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:24 crc kubenswrapper[4949]: I1001 
16:03:24.214916 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" podStartSLOduration=3.214889775 podStartE2EDuration="3.214889775s" podCreationTimestamp="2025-10-01 16:03:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:03:24.210938724 +0000 UTC m=+1303.516544965" watchObservedRunningTime="2025-10-01 16:03:24.214889775 +0000 UTC m=+1303.520495976" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.051256 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.110387 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-zshnf"] Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.110674 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" podUID="9d650e79-1c23-4c79-8cd7-6ee731bc81e6" containerName="dnsmasq-dns" containerID="cri-o://f7bf67ae781a319d9559b81750773177b4ec84e7daeea0d505f28b6517948252" gracePeriod=10 Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.282624 4949 generic.go:334] "Generic (PLEG): container finished" podID="9d650e79-1c23-4c79-8cd7-6ee731bc81e6" containerID="f7bf67ae781a319d9559b81750773177b4ec84e7daeea0d505f28b6517948252" exitCode=0 Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.282920 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" event={"ID":"9d650e79-1c23-4c79-8cd7-6ee731bc81e6","Type":"ContainerDied","Data":"f7bf67ae781a319d9559b81750773177b4ec84e7daeea0d505f28b6517948252"} Oct 01 16:03:32 crc kubenswrapper[4949]: E1001 16:03:32.358762 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9d650e79_1c23_4c79_8cd7_6ee731bc81e6.slice/crio-f7bf67ae781a319d9559b81750773177b4ec84e7daeea0d505f28b6517948252.scope\": RecentStats: unable to find data in memory cache]" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.562683 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.757729 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-openstack-edpm-ipam\") pod \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.757820 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cds48\" (UniqueName: \"kubernetes.io/projected/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-kube-api-access-cds48\") pod \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.757959 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-nb\") pod \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.758028 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-config\") pod \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.758114 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-dns-svc\") pod \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.758193 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-sb\") pod \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\" (UID: \"9d650e79-1c23-4c79-8cd7-6ee731bc81e6\") " Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.763847 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-kube-api-access-cds48" (OuterVolumeSpecName: "kube-api-access-cds48") pod "9d650e79-1c23-4c79-8cd7-6ee731bc81e6" (UID: "9d650e79-1c23-4c79-8cd7-6ee731bc81e6"). InnerVolumeSpecName "kube-api-access-cds48". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.809372 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "9d650e79-1c23-4c79-8cd7-6ee731bc81e6" (UID: "9d650e79-1c23-4c79-8cd7-6ee731bc81e6"). InnerVolumeSpecName "ovsdbserver-nb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.810730 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "9d650e79-1c23-4c79-8cd7-6ee731bc81e6" (UID: "9d650e79-1c23-4c79-8cd7-6ee731bc81e6"). InnerVolumeSpecName "dns-svc". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.811757 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-config" (OuterVolumeSpecName: "config") pod "9d650e79-1c23-4c79-8cd7-6ee731bc81e6" (UID: "9d650e79-1c23-4c79-8cd7-6ee731bc81e6"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.817981 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "9d650e79-1c23-4c79-8cd7-6ee731bc81e6" (UID: "9d650e79-1c23-4c79-8cd7-6ee731bc81e6"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.819104 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "9d650e79-1c23-4c79-8cd7-6ee731bc81e6" (UID: "9d650e79-1c23-4c79-8cd7-6ee731bc81e6"). InnerVolumeSpecName "openstack-edpm-ipam". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.860398 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.860748 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cds48\" (UniqueName: \"kubernetes.io/projected/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-kube-api-access-cds48\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.860764 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.860777 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-config\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.860788 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:32 crc kubenswrapper[4949]: I1001 16:03:32.860799 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/9d650e79-1c23-4c79-8cd7-6ee731bc81e6-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 16:03:33 crc kubenswrapper[4949]: I1001 16:03:33.292953 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" event={"ID":"9d650e79-1c23-4c79-8cd7-6ee731bc81e6","Type":"ContainerDied","Data":"c543c54467d602913e1bacbdcec051dd0e6df254a9ed21da9877b4f25f399728"} Oct 01 16:03:33 crc 
kubenswrapper[4949]: I1001 16:03:33.293029 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-6447ccbd8f-zshnf" Oct 01 16:03:33 crc kubenswrapper[4949]: I1001 16:03:33.294253 4949 scope.go:117] "RemoveContainer" containerID="f7bf67ae781a319d9559b81750773177b4ec84e7daeea0d505f28b6517948252" Oct 01 16:03:33 crc kubenswrapper[4949]: I1001 16:03:33.319873 4949 scope.go:117] "RemoveContainer" containerID="18ecf4670f51bcb7d2586a1f2368a241e6d0220f5135b01f1122191a274cdbe3" Oct 01 16:03:33 crc kubenswrapper[4949]: I1001 16:03:33.324452 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-zshnf"] Oct 01 16:03:33 crc kubenswrapper[4949]: I1001 16:03:33.331656 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-6447ccbd8f-zshnf"] Oct 01 16:03:33 crc kubenswrapper[4949]: I1001 16:03:33.612897 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d650e79-1c23-4c79-8cd7-6ee731bc81e6" path="/var/lib/kubelet/pods/9d650e79-1c23-4c79-8cd7-6ee731bc81e6/volumes" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.314495 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg"] Oct 01 16:03:42 crc kubenswrapper[4949]: E1001 16:03:42.315923 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21b4c0b-7e29-4226-8745-3f942703d8f0" containerName="init" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.315945 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21b4c0b-7e29-4226-8745-3f942703d8f0" containerName="init" Oct 01 16:03:42 crc kubenswrapper[4949]: E1001 16:03:42.315993 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d650e79-1c23-4c79-8cd7-6ee731bc81e6" containerName="dnsmasq-dns" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.316063 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9d650e79-1c23-4c79-8cd7-6ee731bc81e6" containerName="dnsmasq-dns" Oct 01 16:03:42 crc kubenswrapper[4949]: E1001 16:03:42.316148 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c21b4c0b-7e29-4226-8745-3f942703d8f0" containerName="dnsmasq-dns" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.316162 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c21b4c0b-7e29-4226-8745-3f942703d8f0" containerName="dnsmasq-dns" Oct 01 16:03:42 crc kubenswrapper[4949]: E1001 16:03:42.316179 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d650e79-1c23-4c79-8cd7-6ee731bc81e6" containerName="init" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.316189 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d650e79-1c23-4c79-8cd7-6ee731bc81e6" containerName="init" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.317378 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c21b4c0b-7e29-4226-8745-3f942703d8f0" containerName="dnsmasq-dns" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.317433 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d650e79-1c23-4c79-8cd7-6ee731bc81e6" containerName="dnsmasq-dns" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.318419 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.323050 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.324569 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.324969 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg"] Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.326026 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.326068 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/0b976daa-cecf-4085-a431-7d5f85d127e2-kube-api-access-h4pg8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.326139 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.326204 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.331246 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.331504 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.383739 4949 generic.go:334] "Generic (PLEG): container finished" podID="3e904978-9466-4e56-8e31-c4e06b6f49e2" containerID="fc2e2bc7796d64353721c2cdcf31430e108a7b50637f3de9cda9f4c81519bf0c" exitCode=0 Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.383852 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3e904978-9466-4e56-8e31-c4e06b6f49e2","Type":"ContainerDied","Data":"fc2e2bc7796d64353721c2cdcf31430e108a7b50637f3de9cda9f4c81519bf0c"} Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.428019 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.428166 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.428366 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.428417 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/0b976daa-cecf-4085-a431-7d5f85d127e2-kube-api-access-h4pg8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.431656 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.432108 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.433979 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.449671 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/0b976daa-cecf-4085-a431-7d5f85d127e2-kube-api-access-h4pg8\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:42 crc kubenswrapper[4949]: I1001 16:03:42.633985 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:03:43 crc kubenswrapper[4949]: I1001 16:03:43.009985 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg"] Oct 01 16:03:43 crc kubenswrapper[4949]: W1001 16:03:43.015188 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod0b976daa_cecf_4085_a431_7d5f85d127e2.slice/crio-757baa277601ac43f9719c5e45050a0ec8fad0bbc1cd62acb0b1d1bfd020b121 WatchSource:0}: Error finding container 757baa277601ac43f9719c5e45050a0ec8fad0bbc1cd62acb0b1d1bfd020b121: Status 404 returned error can't find the container with id 757baa277601ac43f9719c5e45050a0ec8fad0bbc1cd62acb0b1d1bfd020b121 Oct 01 16:03:43 crc kubenswrapper[4949]: I1001 16:03:43.394727 4949 generic.go:334] "Generic (PLEG): container finished" podID="620a0468-6462-442e-bfcf-ca26669a638a" containerID="55e48e8ab951f0ae21851d388536ccf855ad827e1e4fe1dd5d2b1ddf9b0bda7a" exitCode=0 Oct 01 16:03:43 crc kubenswrapper[4949]: I1001 16:03:43.394814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"620a0468-6462-442e-bfcf-ca26669a638a","Type":"ContainerDied","Data":"55e48e8ab951f0ae21851d388536ccf855ad827e1e4fe1dd5d2b1ddf9b0bda7a"} Oct 01 16:03:43 crc kubenswrapper[4949]: I1001 16:03:43.397213 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-server-0" event={"ID":"3e904978-9466-4e56-8e31-c4e06b6f49e2","Type":"ContainerStarted","Data":"3d1db762108addb23677aadd0a30ba381456450207c28fa91b38a4c3a0aa3726"} Oct 01 16:03:43 crc kubenswrapper[4949]: I1001 16:03:43.397445 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-server-0" Oct 01 16:03:43 crc kubenswrapper[4949]: I1001 16:03:43.398573 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" event={"ID":"0b976daa-cecf-4085-a431-7d5f85d127e2","Type":"ContainerStarted","Data":"757baa277601ac43f9719c5e45050a0ec8fad0bbc1cd62acb0b1d1bfd020b121"} Oct 01 16:03:43 crc kubenswrapper[4949]: I1001 16:03:43.454502 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-server-0" podStartSLOduration=37.454483277 podStartE2EDuration="37.454483277s" podCreationTimestamp="2025-10-01 16:03:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:03:43.446489355 +0000 UTC m=+1322.752095556" watchObservedRunningTime="2025-10-01 16:03:43.454483277 +0000 UTC m=+1322.760089468" Oct 01 16:03:44 crc kubenswrapper[4949]: I1001 16:03:44.409723 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/rabbitmq-cell1-server-0" event={"ID":"620a0468-6462-442e-bfcf-ca26669a638a","Type":"ContainerStarted","Data":"f0109cbca3d8320e9a6dbe8dc78b1236260f89c9e189e34cac9222cb7150f9fd"} Oct 01 16:03:44 crc kubenswrapper[4949]: I1001 16:03:44.410833 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:03:44 crc kubenswrapper[4949]: I1001 16:03:44.445647 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/rabbitmq-cell1-server-0" podStartSLOduration=36.445623123 podStartE2EDuration="36.445623123s" podCreationTimestamp="2025-10-01 16:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:03:44.432431107 +0000 UTC m=+1323.738037338" watchObservedRunningTime="2025-10-01 16:03:44.445623123 +0000 UTC m=+1323.751229334" Oct 01 16:03:48 crc kubenswrapper[4949]: I1001 16:03:48.038492 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:03:48 crc kubenswrapper[4949]: I1001 16:03:48.038969 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:03:48 crc kubenswrapper[4949]: I1001 16:03:48.039010 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 16:03:48 crc kubenswrapper[4949]: I1001 16:03:48.039611 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"77e05f9b2cb8b18e2f508e8c0aeda04f4ef4751cece8ee8551cb51dbf8eabbd8"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 16:03:48 crc kubenswrapper[4949]: I1001 16:03:48.039655 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://77e05f9b2cb8b18e2f508e8c0aeda04f4ef4751cece8ee8551cb51dbf8eabbd8" gracePeriod=600 Oct 01 16:03:48 crc kubenswrapper[4949]: I1001 16:03:48.455017 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="77e05f9b2cb8b18e2f508e8c0aeda04f4ef4751cece8ee8551cb51dbf8eabbd8" exitCode=0 Oct 01 16:03:48 crc kubenswrapper[4949]: I1001 16:03:48.455052 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"77e05f9b2cb8b18e2f508e8c0aeda04f4ef4751cece8ee8551cb51dbf8eabbd8"} Oct 01 16:03:48 crc kubenswrapper[4949]: I1001 16:03:48.455108 4949 scope.go:117] "RemoveContainer" containerID="7dd8953e4b2c2c8892c46fcb9ba2ef1fa5099f63e2374f27e45351f899f750d3" Oct 01 16:03:53 crc kubenswrapper[4949]: I1001 16:03:53.506398 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056"} Oct 01 16:03:53 crc kubenswrapper[4949]: I1001 16:03:53.508430 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" event={"ID":"0b976daa-cecf-4085-a431-7d5f85d127e2","Type":"ContainerStarted","Data":"19919642c99bf230bd71c2bad3373b7df3cd13641c11af2fcabf44579840e112"} Oct 01 16:03:53 crc kubenswrapper[4949]: I1001 16:03:53.545324 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" podStartSLOduration=1.61981452 podStartE2EDuration="11.545303074s" podCreationTimestamp="2025-10-01 16:03:42 +0000 UTC" firstStartedPulling="2025-10-01 16:03:43.017803059 +0000 UTC m=+1322.323409240" lastFinishedPulling="2025-10-01 16:03:52.943291603 +0000 UTC m=+1332.248897794" observedRunningTime="2025-10-01 16:03:53.545246583 +0000 UTC m=+1332.850852794" watchObservedRunningTime="2025-10-01 16:03:53.545303074 +0000 UTC m=+1332.850909265" Oct 01 16:03:57 crc kubenswrapper[4949]: I1001 16:03:57.351422 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/rabbitmq-server-0" Oct 01 16:03:58 crc kubenswrapper[4949]: I1001 16:03:58.447398 4949 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack/rabbitmq-cell1-server-0" Oct 01 16:04:04 crc kubenswrapper[4949]: I1001 16:04:04.614713 4949 generic.go:334] "Generic (PLEG): container finished" podID="0b976daa-cecf-4085-a431-7d5f85d127e2" containerID="19919642c99bf230bd71c2bad3373b7df3cd13641c11af2fcabf44579840e112" exitCode=0 Oct 01 16:04:04 crc kubenswrapper[4949]: I1001 16:04:04.614795 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" event={"ID":"0b976daa-cecf-4085-a431-7d5f85d127e2","Type":"ContainerDied","Data":"19919642c99bf230bd71c2bad3373b7df3cd13641c11af2fcabf44579840e112"} Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.024614 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.111614 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-inventory\") pod \"0b976daa-cecf-4085-a431-7d5f85d127e2\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.111761 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/0b976daa-cecf-4085-a431-7d5f85d127e2-kube-api-access-h4pg8\") pod \"0b976daa-cecf-4085-a431-7d5f85d127e2\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.111889 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-repo-setup-combined-ca-bundle\") pod \"0b976daa-cecf-4085-a431-7d5f85d127e2\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " Oct 01 16:04:06 crc 
kubenswrapper[4949]: I1001 16:04:06.111924 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-ssh-key\") pod \"0b976daa-cecf-4085-a431-7d5f85d127e2\" (UID: \"0b976daa-cecf-4085-a431-7d5f85d127e2\") " Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.117385 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b976daa-cecf-4085-a431-7d5f85d127e2-kube-api-access-h4pg8" (OuterVolumeSpecName: "kube-api-access-h4pg8") pod "0b976daa-cecf-4085-a431-7d5f85d127e2" (UID: "0b976daa-cecf-4085-a431-7d5f85d127e2"). InnerVolumeSpecName "kube-api-access-h4pg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.123771 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "0b976daa-cecf-4085-a431-7d5f85d127e2" (UID: "0b976daa-cecf-4085-a431-7d5f85d127e2"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.139999 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "0b976daa-cecf-4085-a431-7d5f85d127e2" (UID: "0b976daa-cecf-4085-a431-7d5f85d127e2"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.142853 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-inventory" (OuterVolumeSpecName: "inventory") pod "0b976daa-cecf-4085-a431-7d5f85d127e2" (UID: "0b976daa-cecf-4085-a431-7d5f85d127e2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.214405 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h4pg8\" (UniqueName: \"kubernetes.io/projected/0b976daa-cecf-4085-a431-7d5f85d127e2-kube-api-access-h4pg8\") on node \"crc\" DevicePath \"\"" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.214442 4949 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.214454 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.214465 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/0b976daa-cecf-4085-a431-7d5f85d127e2-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.633151 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" event={"ID":"0b976daa-cecf-4085-a431-7d5f85d127e2","Type":"ContainerDied","Data":"757baa277601ac43f9719c5e45050a0ec8fad0bbc1cd62acb0b1d1bfd020b121"} Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.633651 4949 pod_container_deletor.go:80] 
"Container not found in pod's containers" containerID="757baa277601ac43f9719c5e45050a0ec8fad0bbc1cd62acb0b1d1bfd020b121" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.633218 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.720338 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4"] Oct 01 16:04:06 crc kubenswrapper[4949]: E1001 16:04:06.720720 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0b976daa-cecf-4085-a431-7d5f85d127e2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.720734 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="0b976daa-cecf-4085-a431-7d5f85d127e2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.720914 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="0b976daa-cecf-4085-a431-7d5f85d127e2" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.721470 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.723887 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.724027 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.725393 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.726912 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.739097 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4"] Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.827673 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.827759 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.827801 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.827952 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9phnd\" (UniqueName: \"kubernetes.io/projected/eaf20575-fb73-4f03-9b71-e9cf7f76710b-kube-api-access-9phnd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.929386 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.929478 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9phnd\" (UniqueName: \"kubernetes.io/projected/eaf20575-fb73-4f03-9b71-e9cf7f76710b-kube-api-access-9phnd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.929600 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-bootstrap-combined-ca-bundle\") pod 
\"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.929651 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.934962 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.938959 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.948577 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:06 crc kubenswrapper[4949]: I1001 16:04:06.949347 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-9phnd\" (UniqueName: \"kubernetes.io/projected/eaf20575-fb73-4f03-9b71-e9cf7f76710b-kube-api-access-9phnd\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:07 crc kubenswrapper[4949]: I1001 16:04:07.040366 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:04:07 crc kubenswrapper[4949]: I1001 16:04:07.569053 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4"] Oct 01 16:04:07 crc kubenswrapper[4949]: I1001 16:04:07.643979 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" event={"ID":"eaf20575-fb73-4f03-9b71-e9cf7f76710b","Type":"ContainerStarted","Data":"b6b20062cbaa1a93ca26424244c6834d9dbe75dda9c1b0abafc3de6984268089"} Oct 01 16:04:08 crc kubenswrapper[4949]: I1001 16:04:08.654896 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" event={"ID":"eaf20575-fb73-4f03-9b71-e9cf7f76710b","Type":"ContainerStarted","Data":"b279d97a6d69c40fcafc36962f6d15a8dfe7c218281a10d92eead4e772378c37"} Oct 01 16:04:08 crc kubenswrapper[4949]: I1001 16:04:08.692765 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" podStartSLOduration=2.284821365 podStartE2EDuration="2.692746432s" podCreationTimestamp="2025-10-01 16:04:06 +0000 UTC" firstStartedPulling="2025-10-01 16:04:07.57535611 +0000 UTC m=+1346.880962301" lastFinishedPulling="2025-10-01 16:04:07.983281177 +0000 UTC m=+1347.288887368" observedRunningTime="2025-10-01 16:04:08.676737016 +0000 UTC m=+1347.982343227" watchObservedRunningTime="2025-10-01 
16:04:08.692746432 +0000 UTC m=+1347.998352633" Oct 01 16:04:44 crc kubenswrapper[4949]: I1001 16:04:44.007903 4949 scope.go:117] "RemoveContainer" containerID="75d734ecd766ed27af83402c8abe1a9e364172aed99922df783b9a5c8c7227a8" Oct 01 16:04:44 crc kubenswrapper[4949]: I1001 16:04:44.052660 4949 scope.go:117] "RemoveContainer" containerID="ec2d0370e6be55fc2f394b7ef2a90e67e3323a05ffe1e940ef749a614cc67a4b" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.193858 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-bjk9h"] Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.196219 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.202556 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjk9h"] Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.220120 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-utilities\") pod \"redhat-marketplace-bjk9h\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.220711 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-catalog-content\") pod \"redhat-marketplace-bjk9h\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.220759 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7fwx6\" (UniqueName: 
\"kubernetes.io/projected/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-kube-api-access-7fwx6\") pod \"redhat-marketplace-bjk9h\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.322247 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-utilities\") pod \"redhat-marketplace-bjk9h\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.322403 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-catalog-content\") pod \"redhat-marketplace-bjk9h\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.322437 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7fwx6\" (UniqueName: \"kubernetes.io/projected/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-kube-api-access-7fwx6\") pod \"redhat-marketplace-bjk9h\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.322857 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-utilities\") pod \"redhat-marketplace-bjk9h\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.322920 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-catalog-content\") pod \"redhat-marketplace-bjk9h\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.343799 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7fwx6\" (UniqueName: \"kubernetes.io/projected/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-kube-api-access-7fwx6\") pod \"redhat-marketplace-bjk9h\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:42 crc kubenswrapper[4949]: I1001 16:05:42.516823 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:43 crc kubenswrapper[4949]: I1001 16:05:43.024192 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjk9h"] Oct 01 16:05:43 crc kubenswrapper[4949]: I1001 16:05:43.664003 4949 generic.go:334] "Generic (PLEG): container finished" podID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerID="092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456" exitCode=0 Oct 01 16:05:43 crc kubenswrapper[4949]: I1001 16:05:43.664040 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjk9h" event={"ID":"c7945a6b-409e-4f94-b32b-9b61f9aaff7b","Type":"ContainerDied","Data":"092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456"} Oct 01 16:05:43 crc kubenswrapper[4949]: I1001 16:05:43.664330 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjk9h" event={"ID":"c7945a6b-409e-4f94-b32b-9b61f9aaff7b","Type":"ContainerStarted","Data":"e113f390d86948c2d89164584f990b42c1f812108859dc9fe76bc6205869c656"} Oct 01 16:05:44 crc kubenswrapper[4949]: I1001 16:05:44.164640 4949 scope.go:117] "RemoveContainer" 
containerID="4fa4f79ebcfd549e4d4b251e9b94b398074ed3501b9947b7e40828532899f1ca" Oct 01 16:05:44 crc kubenswrapper[4949]: I1001 16:05:44.679554 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjk9h" event={"ID":"c7945a6b-409e-4f94-b32b-9b61f9aaff7b","Type":"ContainerStarted","Data":"112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c"} Oct 01 16:05:45 crc kubenswrapper[4949]: I1001 16:05:45.690721 4949 generic.go:334] "Generic (PLEG): container finished" podID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerID="112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c" exitCode=0 Oct 01 16:05:45 crc kubenswrapper[4949]: I1001 16:05:45.690763 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjk9h" event={"ID":"c7945a6b-409e-4f94-b32b-9b61f9aaff7b","Type":"ContainerDied","Data":"112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c"} Oct 01 16:05:46 crc kubenswrapper[4949]: I1001 16:05:46.704316 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjk9h" event={"ID":"c7945a6b-409e-4f94-b32b-9b61f9aaff7b","Type":"ContainerStarted","Data":"5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc"} Oct 01 16:05:52 crc kubenswrapper[4949]: I1001 16:05:52.517779 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:52 crc kubenswrapper[4949]: I1001 16:05:52.518330 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:52 crc kubenswrapper[4949]: I1001 16:05:52.583407 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:52 crc kubenswrapper[4949]: I1001 16:05:52.616242 4949 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/redhat-marketplace-bjk9h" podStartSLOduration=8.030309705 podStartE2EDuration="10.616213309s" podCreationTimestamp="2025-10-01 16:05:42 +0000 UTC" firstStartedPulling="2025-10-01 16:05:43.665775191 +0000 UTC m=+1442.971381382" lastFinishedPulling="2025-10-01 16:05:46.251678775 +0000 UTC m=+1445.557284986" observedRunningTime="2025-10-01 16:05:46.724570574 +0000 UTC m=+1446.030176775" watchObservedRunningTime="2025-10-01 16:05:52.616213309 +0000 UTC m=+1451.921819540" Oct 01 16:05:52 crc kubenswrapper[4949]: I1001 16:05:52.854063 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:52 crc kubenswrapper[4949]: I1001 16:05:52.898675 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjk9h"] Oct 01 16:05:54 crc kubenswrapper[4949]: I1001 16:05:54.819616 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-bjk9h" podUID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerName="registry-server" containerID="cri-o://5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc" gracePeriod=2 Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.230432 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wz4dm"] Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.233441 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.247295 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wz4dm"] Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.309108 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.382326 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7wq6\" (UniqueName: \"kubernetes.io/projected/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-kube-api-access-t7wq6\") pod \"certified-operators-wz4dm\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.382403 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-utilities\") pod \"certified-operators-wz4dm\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.382428 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-catalog-content\") pod \"certified-operators-wz4dm\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.484347 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7fwx6\" (UniqueName: \"kubernetes.io/projected/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-kube-api-access-7fwx6\") pod \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.484457 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-catalog-content\") pod \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\" 
(UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.484718 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-utilities\") pod \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\" (UID: \"c7945a6b-409e-4f94-b32b-9b61f9aaff7b\") " Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.485568 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t7wq6\" (UniqueName: \"kubernetes.io/projected/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-kube-api-access-t7wq6\") pod \"certified-operators-wz4dm\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.485649 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-utilities" (OuterVolumeSpecName: "utilities") pod "c7945a6b-409e-4f94-b32b-9b61f9aaff7b" (UID: "c7945a6b-409e-4f94-b32b-9b61f9aaff7b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.485692 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-utilities\") pod \"certified-operators-wz4dm\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.486273 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-utilities\") pod \"certified-operators-wz4dm\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.486747 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-catalog-content\") pod \"certified-operators-wz4dm\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.486382 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-catalog-content\") pod \"certified-operators-wz4dm\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.487010 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.491038 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/projected/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-kube-api-access-7fwx6" (OuterVolumeSpecName: "kube-api-access-7fwx6") pod "c7945a6b-409e-4f94-b32b-9b61f9aaff7b" (UID: "c7945a6b-409e-4f94-b32b-9b61f9aaff7b"). InnerVolumeSpecName "kube-api-access-7fwx6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.506286 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7wq6\" (UniqueName: \"kubernetes.io/projected/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-kube-api-access-t7wq6\") pod \"certified-operators-wz4dm\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.506321 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7945a6b-409e-4f94-b32b-9b61f9aaff7b" (UID: "c7945a6b-409e-4f94-b32b-9b61f9aaff7b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.588418 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.588452 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7fwx6\" (UniqueName: \"kubernetes.io/projected/c7945a6b-409e-4f94-b32b-9b61f9aaff7b-kube-api-access-7fwx6\") on node \"crc\" DevicePath \"\"" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.622834 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.870539 4949 generic.go:334] "Generic (PLEG): container finished" podID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerID="5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc" exitCode=0 Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.870939 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-bjk9h" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.871898 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjk9h" event={"ID":"c7945a6b-409e-4f94-b32b-9b61f9aaff7b","Type":"ContainerDied","Data":"5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc"} Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.871930 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-bjk9h" event={"ID":"c7945a6b-409e-4f94-b32b-9b61f9aaff7b","Type":"ContainerDied","Data":"e113f390d86948c2d89164584f990b42c1f812108859dc9fe76bc6205869c656"} Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.871950 4949 scope.go:117] "RemoveContainer" containerID="5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.897292 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjk9h"] Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.904024 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-bjk9h"] Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.907757 4949 scope.go:117] "RemoveContainer" containerID="112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.928065 4949 scope.go:117] "RemoveContainer" 
containerID="092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.981715 4949 scope.go:117] "RemoveContainer" containerID="5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc" Oct 01 16:05:55 crc kubenswrapper[4949]: E1001 16:05:55.982315 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc\": container with ID starting with 5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc not found: ID does not exist" containerID="5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.982358 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc"} err="failed to get container status \"5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc\": rpc error: code = NotFound desc = could not find container \"5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc\": container with ID starting with 5150f2bdd9d2672a000547c0c17451bf6792b2d346c0dd33f1d1f36f36ec99dc not found: ID does not exist" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.982385 4949 scope.go:117] "RemoveContainer" containerID="112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c" Oct 01 16:05:55 crc kubenswrapper[4949]: E1001 16:05:55.982939 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c\": container with ID starting with 112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c not found: ID does not exist" containerID="112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c" Oct 01 16:05:55 crc 
kubenswrapper[4949]: I1001 16:05:55.982978 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c"} err="failed to get container status \"112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c\": rpc error: code = NotFound desc = could not find container \"112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c\": container with ID starting with 112b6a09d55e670b9a1ca73bbcfe2ce26126855dbef83645ebd4a4c1be58e17c not found: ID does not exist" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.983001 4949 scope.go:117] "RemoveContainer" containerID="092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456" Oct 01 16:05:55 crc kubenswrapper[4949]: E1001 16:05:55.983367 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456\": container with ID starting with 092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456 not found: ID does not exist" containerID="092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456" Oct 01 16:05:55 crc kubenswrapper[4949]: I1001 16:05:55.983395 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456"} err="failed to get container status \"092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456\": rpc error: code = NotFound desc = could not find container \"092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456\": container with ID starting with 092288cb0665938c3283f34f0c3b2bd1c6ca2c11db7b8730998a5f96000ef456 not found: ID does not exist" Oct 01 16:05:56 crc kubenswrapper[4949]: I1001 16:05:56.091756 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wz4dm"] Oct 01 16:05:56 
crc kubenswrapper[4949]: W1001 16:05:56.099229 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9f70c3f4_8ef8_4dc5_9d33_2a8755da71e0.slice/crio-6eefec8c35fb1d0cabfb4e5d789b01d69fc798f2c8d3f63587c0a2ebdfbb48c3 WatchSource:0}: Error finding container 6eefec8c35fb1d0cabfb4e5d789b01d69fc798f2c8d3f63587c0a2ebdfbb48c3: Status 404 returned error can't find the container with id 6eefec8c35fb1d0cabfb4e5d789b01d69fc798f2c8d3f63587c0a2ebdfbb48c3 Oct 01 16:05:56 crc kubenswrapper[4949]: I1001 16:05:56.887914 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerID="e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303" exitCode=0 Oct 01 16:05:56 crc kubenswrapper[4949]: I1001 16:05:56.887958 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz4dm" event={"ID":"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0","Type":"ContainerDied","Data":"e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303"} Oct 01 16:05:56 crc kubenswrapper[4949]: I1001 16:05:56.887985 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz4dm" event={"ID":"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0","Type":"ContainerStarted","Data":"6eefec8c35fb1d0cabfb4e5d789b01d69fc798f2c8d3f63587c0a2ebdfbb48c3"} Oct 01 16:05:57 crc kubenswrapper[4949]: I1001 16:05:57.619629 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" path="/var/lib/kubelet/pods/c7945a6b-409e-4f94-b32b-9b61f9aaff7b/volumes" Oct 01 16:06:00 crc kubenswrapper[4949]: I1001 16:06:00.941166 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerID="6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7" exitCode=0 Oct 01 16:06:00 crc kubenswrapper[4949]: I1001 16:06:00.941375 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz4dm" event={"ID":"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0","Type":"ContainerDied","Data":"6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7"} Oct 01 16:06:03 crc kubenswrapper[4949]: I1001 16:06:03.982750 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz4dm" event={"ID":"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0","Type":"ContainerStarted","Data":"05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218"} Oct 01 16:06:04 crc kubenswrapper[4949]: I1001 16:06:04.005416 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wz4dm" podStartSLOduration=3.056381583 podStartE2EDuration="9.005401274s" podCreationTimestamp="2025-10-01 16:05:55 +0000 UTC" firstStartedPulling="2025-10-01 16:05:56.891395622 +0000 UTC m=+1456.197001853" lastFinishedPulling="2025-10-01 16:06:02.840415353 +0000 UTC m=+1462.146021544" observedRunningTime="2025-10-01 16:06:04.003421269 +0000 UTC m=+1463.309027460" watchObservedRunningTime="2025-10-01 16:06:04.005401274 +0000 UTC m=+1463.311007465" Oct 01 16:06:05 crc kubenswrapper[4949]: I1001 16:06:05.623194 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:06:05 crc kubenswrapper[4949]: I1001 16:06:05.623493 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:06:05 crc kubenswrapper[4949]: I1001 16:06:05.691888 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:06:15 crc kubenswrapper[4949]: I1001 16:06:15.694392 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:06:15 crc 
kubenswrapper[4949]: I1001 16:06:15.771239 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wz4dm"] Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.111259 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wz4dm" podUID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerName="registry-server" containerID="cri-o://05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218" gracePeriod=2 Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.547527 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.695989 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7wq6\" (UniqueName: \"kubernetes.io/projected/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-kube-api-access-t7wq6\") pod \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.696100 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-utilities\") pod \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.696175 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-catalog-content\") pod \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\" (UID: \"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0\") " Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.697061 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-utilities" (OuterVolumeSpecName: "utilities") pod "9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" (UID: "9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.701258 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-kube-api-access-t7wq6" (OuterVolumeSpecName: "kube-api-access-t7wq6") pod "9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" (UID: "9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0"). InnerVolumeSpecName "kube-api-access-t7wq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.755027 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" (UID: "9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.798964 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t7wq6\" (UniqueName: \"kubernetes.io/projected/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-kube-api-access-t7wq6\") on node \"crc\" DevicePath \"\"" Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.799218 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:06:16 crc kubenswrapper[4949]: I1001 16:06:16.799279 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.123727 4949 generic.go:334] "Generic (PLEG): container finished" podID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerID="05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218" exitCode=0 Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.123782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz4dm" event={"ID":"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0","Type":"ContainerDied","Data":"05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218"} Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.123845 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wz4dm" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.124114 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wz4dm" event={"ID":"9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0","Type":"ContainerDied","Data":"6eefec8c35fb1d0cabfb4e5d789b01d69fc798f2c8d3f63587c0a2ebdfbb48c3"} Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.124152 4949 scope.go:117] "RemoveContainer" containerID="05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.153887 4949 scope.go:117] "RemoveContainer" containerID="6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.181384 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wz4dm"] Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.192285 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wz4dm"] Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.204906 4949 scope.go:117] "RemoveContainer" containerID="e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.249328 4949 scope.go:117] "RemoveContainer" containerID="05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218" Oct 01 16:06:17 crc kubenswrapper[4949]: E1001 16:06:17.250012 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218\": container with ID starting with 05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218 not found: ID does not exist" containerID="05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.250047 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218"} err="failed to get container status \"05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218\": rpc error: code = NotFound desc = could not find container \"05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218\": container with ID starting with 05c2155b080e8b3e0f64678856465a04c41d3cc84918ba3976577fd83b80c218 not found: ID does not exist" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.250072 4949 scope.go:117] "RemoveContainer" containerID="6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7" Oct 01 16:06:17 crc kubenswrapper[4949]: E1001 16:06:17.250353 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7\": container with ID starting with 6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7 not found: ID does not exist" containerID="6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.250378 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7"} err="failed to get container status \"6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7\": rpc error: code = NotFound desc = could not find container \"6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7\": container with ID starting with 6201e3e7b3f0df707446ac9874d9f3e639e644fb8bbecfed63ef910ae3b6b7e7 not found: ID does not exist" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.250396 4949 scope.go:117] "RemoveContainer" containerID="e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303" Oct 01 16:06:17 crc kubenswrapper[4949]: E1001 
16:06:17.250702 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303\": container with ID starting with e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303 not found: ID does not exist" containerID="e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.250729 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303"} err="failed to get container status \"e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303\": rpc error: code = NotFound desc = could not find container \"e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303\": container with ID starting with e0d318122ae057da14a82cc4ea3f577a516fded8bc0947850c7a31421d638303 not found: ID does not exist" Oct 01 16:06:17 crc kubenswrapper[4949]: I1001 16:06:17.618160 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" path="/var/lib/kubelet/pods/9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0/volumes" Oct 01 16:06:18 crc kubenswrapper[4949]: I1001 16:06:18.039325 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:06:18 crc kubenswrapper[4949]: I1001 16:06:18.039418 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.173311 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-6x5pl"] Oct 01 16:06:25 crc kubenswrapper[4949]: E1001 16:06:25.174609 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerName="extract-content" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.174632 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerName="extract-content" Oct 01 16:06:25 crc kubenswrapper[4949]: E1001 16:06:25.174650 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerName="extract-utilities" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.174658 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerName="extract-utilities" Oct 01 16:06:25 crc kubenswrapper[4949]: E1001 16:06:25.174679 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerName="extract-content" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.174688 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerName="extract-content" Oct 01 16:06:25 crc kubenswrapper[4949]: E1001 16:06:25.174712 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerName="extract-utilities" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.174719 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerName="extract-utilities" Oct 01 16:06:25 crc kubenswrapper[4949]: E1001 16:06:25.174728 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerName="registry-server" Oct 01 16:06:25 crc 
kubenswrapper[4949]: I1001 16:06:25.174736 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerName="registry-server" Oct 01 16:06:25 crc kubenswrapper[4949]: E1001 16:06:25.174757 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerName="registry-server" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.174765 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerName="registry-server" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.174987 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f70c3f4-8ef8-4dc5-9d33-2a8755da71e0" containerName="registry-server" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.175007 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7945a6b-409e-4f94-b32b-9b61f9aaff7b" containerName="registry-server" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.177198 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.184509 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6x5pl"] Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.360193 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-utilities\") pod \"community-operators-6x5pl\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.360274 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-catalog-content\") pod \"community-operators-6x5pl\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.360302 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-brcqv\" (UniqueName: \"kubernetes.io/projected/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-kube-api-access-brcqv\") pod \"community-operators-6x5pl\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.461751 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-utilities\") pod \"community-operators-6x5pl\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.461808 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-catalog-content\") pod \"community-operators-6x5pl\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.461830 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-brcqv\" (UniqueName: \"kubernetes.io/projected/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-kube-api-access-brcqv\") pod \"community-operators-6x5pl\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.462534 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-utilities\") pod \"community-operators-6x5pl\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.462561 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-catalog-content\") pod \"community-operators-6x5pl\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.483847 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-brcqv\" (UniqueName: \"kubernetes.io/projected/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-kube-api-access-brcqv\") pod \"community-operators-6x5pl\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.504335 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:25 crc kubenswrapper[4949]: I1001 16:06:25.785754 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-6x5pl"] Oct 01 16:06:26 crc kubenswrapper[4949]: I1001 16:06:26.231063 4949 generic.go:334] "Generic (PLEG): container finished" podID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerID="e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034" exitCode=0 Oct 01 16:06:26 crc kubenswrapper[4949]: I1001 16:06:26.231285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x5pl" event={"ID":"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e","Type":"ContainerDied","Data":"e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034"} Oct 01 16:06:26 crc kubenswrapper[4949]: I1001 16:06:26.231377 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x5pl" event={"ID":"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e","Type":"ContainerStarted","Data":"ba7633b3aa1be8044f40c2c1cbbf03d8bf65fa19cc8a45658203958eb5ceb3db"} Oct 01 16:06:27 crc kubenswrapper[4949]: I1001 16:06:27.242424 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x5pl" event={"ID":"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e","Type":"ContainerStarted","Data":"ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d"} Oct 01 16:06:28 crc kubenswrapper[4949]: I1001 16:06:28.256395 4949 generic.go:334] "Generic (PLEG): container finished" podID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerID="ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d" exitCode=0 Oct 01 16:06:28 crc kubenswrapper[4949]: I1001 16:06:28.256754 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x5pl" 
event={"ID":"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e","Type":"ContainerDied","Data":"ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d"} Oct 01 16:06:29 crc kubenswrapper[4949]: I1001 16:06:29.269181 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x5pl" event={"ID":"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e","Type":"ContainerStarted","Data":"48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd"} Oct 01 16:06:29 crc kubenswrapper[4949]: I1001 16:06:29.298763 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-6x5pl" podStartSLOduration=1.856210837 podStartE2EDuration="4.29873427s" podCreationTimestamp="2025-10-01 16:06:25 +0000 UTC" firstStartedPulling="2025-10-01 16:06:26.233066584 +0000 UTC m=+1485.538672775" lastFinishedPulling="2025-10-01 16:06:28.675590007 +0000 UTC m=+1487.981196208" observedRunningTime="2025-10-01 16:06:29.293930516 +0000 UTC m=+1488.599536727" watchObservedRunningTime="2025-10-01 16:06:29.29873427 +0000 UTC m=+1488.604340481" Oct 01 16:06:35 crc kubenswrapper[4949]: I1001 16:06:35.505172 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:35 crc kubenswrapper[4949]: I1001 16:06:35.505662 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:35 crc kubenswrapper[4949]: I1001 16:06:35.555221 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:36 crc kubenswrapper[4949]: I1001 16:06:36.396991 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:36 crc kubenswrapper[4949]: I1001 16:06:36.471385 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-6x5pl"] Oct 01 16:06:38 crc kubenswrapper[4949]: I1001 16:06:38.357151 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-6x5pl" podUID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerName="registry-server" containerID="cri-o://48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd" gracePeriod=2 Oct 01 16:06:38 crc kubenswrapper[4949]: I1001 16:06:38.836043 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:38 crc kubenswrapper[4949]: I1001 16:06:38.928160 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-catalog-content\") pod \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " Oct 01 16:06:38 crc kubenswrapper[4949]: I1001 16:06:38.928662 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-utilities\") pod \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " Oct 01 16:06:38 crc kubenswrapper[4949]: I1001 16:06:38.929021 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-brcqv\" (UniqueName: \"kubernetes.io/projected/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-kube-api-access-brcqv\") pod \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\" (UID: \"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e\") " Oct 01 16:06:38 crc kubenswrapper[4949]: I1001 16:06:38.929483 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-utilities" (OuterVolumeSpecName: "utilities") pod "62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" (UID: 
"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:06:38 crc kubenswrapper[4949]: I1001 16:06:38.929952 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:06:38 crc kubenswrapper[4949]: I1001 16:06:38.934596 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-kube-api-access-brcqv" (OuterVolumeSpecName: "kube-api-access-brcqv") pod "62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" (UID: "62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e"). InnerVolumeSpecName "kube-api-access-brcqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:06:38 crc kubenswrapper[4949]: I1001 16:06:38.975933 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" (UID: "62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.032905 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-brcqv\" (UniqueName: \"kubernetes.io/projected/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-kube-api-access-brcqv\") on node \"crc\" DevicePath \"\"" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.032960 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.367970 4949 generic.go:334] "Generic (PLEG): container finished" podID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerID="48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd" exitCode=0 Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.368021 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x5pl" event={"ID":"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e","Type":"ContainerDied","Data":"48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd"} Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.368054 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-6x5pl" event={"ID":"62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e","Type":"ContainerDied","Data":"ba7633b3aa1be8044f40c2c1cbbf03d8bf65fa19cc8a45658203958eb5ceb3db"} Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.368082 4949 scope.go:117] "RemoveContainer" containerID="48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.368093 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-6x5pl" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.390748 4949 scope.go:117] "RemoveContainer" containerID="ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.422463 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-6x5pl"] Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.433183 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-6x5pl"] Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.446210 4949 scope.go:117] "RemoveContainer" containerID="e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.471936 4949 scope.go:117] "RemoveContainer" containerID="48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd" Oct 01 16:06:39 crc kubenswrapper[4949]: E1001 16:06:39.472376 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd\": container with ID starting with 48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd not found: ID does not exist" containerID="48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.472507 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd"} err="failed to get container status \"48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd\": rpc error: code = NotFound desc = could not find container \"48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd\": container with ID starting with 48a2e1a156db100f6c91ba1e5c9d9cd321b310cb2999bbe369bfeccd852ee5bd not 
found: ID does not exist" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.472597 4949 scope.go:117] "RemoveContainer" containerID="ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d" Oct 01 16:06:39 crc kubenswrapper[4949]: E1001 16:06:39.472949 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d\": container with ID starting with ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d not found: ID does not exist" containerID="ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.472983 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d"} err="failed to get container status \"ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d\": rpc error: code = NotFound desc = could not find container \"ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d\": container with ID starting with ff0aa15654c391e921c11af0de3facdb009ef1143970f09ea71561c90d704c2d not found: ID does not exist" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.473004 4949 scope.go:117] "RemoveContainer" containerID="e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034" Oct 01 16:06:39 crc kubenswrapper[4949]: E1001 16:06:39.473252 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034\": container with ID starting with e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034 not found: ID does not exist" containerID="e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.473350 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034"} err="failed to get container status \"e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034\": rpc error: code = NotFound desc = could not find container \"e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034\": container with ID starting with e10b7fb2dae8bd5662d6ad35f8a7b3cbb68a544a205b5fd0efc44c618c2aa034 not found: ID does not exist" Oct 01 16:06:39 crc kubenswrapper[4949]: I1001 16:06:39.617918 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" path="/var/lib/kubelet/pods/62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e/volumes" Oct 01 16:06:44 crc kubenswrapper[4949]: I1001 16:06:44.253348 4949 scope.go:117] "RemoveContainer" containerID="e9daf6bfce93f60aa08e2e7040b37d57982a080a4a28ee2cc5b1c4e4495e42bd" Oct 01 16:06:44 crc kubenswrapper[4949]: I1001 16:06:44.290309 4949 scope.go:117] "RemoveContainer" containerID="0b96858e818a4c6e6c8d294ff21ec55ebe8ae2b732eb598b470be263475628c0" Oct 01 16:06:44 crc kubenswrapper[4949]: I1001 16:06:44.326639 4949 scope.go:117] "RemoveContainer" containerID="7e87c790fabda9344b33ba74b81cd835661c19db51a895a0d3d2ef30ca000edd" Oct 01 16:06:44 crc kubenswrapper[4949]: I1001 16:06:44.354030 4949 scope.go:117] "RemoveContainer" containerID="1a9341233116f1a5bc9b6a797bd0e03a795a5a97034377639e714ac5b8cf3965" Oct 01 16:06:48 crc kubenswrapper[4949]: I1001 16:06:48.038562 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:06:48 crc kubenswrapper[4949]: I1001 16:06:48.039368 4949 prober.go:107] "Probe failed" probeType="Liveness" 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:07:18 crc kubenswrapper[4949]: I1001 16:07:18.038259 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:07:18 crc kubenswrapper[4949]: I1001 16:07:18.038798 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:07:18 crc kubenswrapper[4949]: I1001 16:07:18.038842 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 16:07:18 crc kubenswrapper[4949]: I1001 16:07:18.039536 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 16:07:18 crc kubenswrapper[4949]: I1001 16:07:18.039604 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" 
containerID="cri-o://8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" gracePeriod=600 Oct 01 16:07:18 crc kubenswrapper[4949]: E1001 16:07:18.161633 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:07:18 crc kubenswrapper[4949]: I1001 16:07:18.755098 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" exitCode=0 Oct 01 16:07:18 crc kubenswrapper[4949]: I1001 16:07:18.755185 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056"} Oct 01 16:07:18 crc kubenswrapper[4949]: I1001 16:07:18.755969 4949 scope.go:117] "RemoveContainer" containerID="77e05f9b2cb8b18e2f508e8c0aeda04f4ef4751cece8ee8551cb51dbf8eabbd8" Oct 01 16:07:18 crc kubenswrapper[4949]: I1001 16:07:18.757225 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:07:18 crc kubenswrapper[4949]: E1001 16:07:18.757958 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" 
podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:07:30 crc kubenswrapper[4949]: I1001 16:07:30.601926 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:07:30 crc kubenswrapper[4949]: E1001 16:07:30.603105 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:07:32 crc kubenswrapper[4949]: I1001 16:07:32.905840 4949 generic.go:334] "Generic (PLEG): container finished" podID="eaf20575-fb73-4f03-9b71-e9cf7f76710b" containerID="b279d97a6d69c40fcafc36962f6d15a8dfe7c218281a10d92eead4e772378c37" exitCode=0 Oct 01 16:07:32 crc kubenswrapper[4949]: I1001 16:07:32.905916 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" event={"ID":"eaf20575-fb73-4f03-9b71-e9cf7f76710b","Type":"ContainerDied","Data":"b279d97a6d69c40fcafc36962f6d15a8dfe7c218281a10d92eead4e772378c37"} Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.362947 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.449003 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-bootstrap-combined-ca-bundle\") pod \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.449680 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9phnd\" (UniqueName: \"kubernetes.io/projected/eaf20575-fb73-4f03-9b71-e9cf7f76710b-kube-api-access-9phnd\") pod \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.449863 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-inventory\") pod \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.450015 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-ssh-key\") pod \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\" (UID: \"eaf20575-fb73-4f03-9b71-e9cf7f76710b\") " Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.454723 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eaf20575-fb73-4f03-9b71-e9cf7f76710b-kube-api-access-9phnd" (OuterVolumeSpecName: "kube-api-access-9phnd") pod "eaf20575-fb73-4f03-9b71-e9cf7f76710b" (UID: "eaf20575-fb73-4f03-9b71-e9cf7f76710b"). InnerVolumeSpecName "kube-api-access-9phnd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.455281 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "eaf20575-fb73-4f03-9b71-e9cf7f76710b" (UID: "eaf20575-fb73-4f03-9b71-e9cf7f76710b"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.475330 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eaf20575-fb73-4f03-9b71-e9cf7f76710b" (UID: "eaf20575-fb73-4f03-9b71-e9cf7f76710b"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.477883 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-inventory" (OuterVolumeSpecName: "inventory") pod "eaf20575-fb73-4f03-9b71-e9cf7f76710b" (UID: "eaf20575-fb73-4f03-9b71-e9cf7f76710b"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.553467 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9phnd\" (UniqueName: \"kubernetes.io/projected/eaf20575-fb73-4f03-9b71-e9cf7f76710b-kube-api-access-9phnd\") on node \"crc\" DevicePath \"\"" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.553504 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.553516 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.553529 4949 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eaf20575-fb73-4f03-9b71-e9cf7f76710b-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.926785 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" event={"ID":"eaf20575-fb73-4f03-9b71-e9cf7f76710b","Type":"ContainerDied","Data":"b6b20062cbaa1a93ca26424244c6834d9dbe75dda9c1b0abafc3de6984268089"} Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.926988 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6b20062cbaa1a93ca26424244c6834d9dbe75dda9c1b0abafc3de6984268089" Oct 01 16:07:34 crc kubenswrapper[4949]: I1001 16:07:34.926898 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.016395 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk"] Oct 01 16:07:35 crc kubenswrapper[4949]: E1001 16:07:35.016814 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerName="registry-server" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.016830 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerName="registry-server" Oct 01 16:07:35 crc kubenswrapper[4949]: E1001 16:07:35.016843 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerName="extract-utilities" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.016850 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerName="extract-utilities" Oct 01 16:07:35 crc kubenswrapper[4949]: E1001 16:07:35.016871 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eaf20575-fb73-4f03-9b71-e9cf7f76710b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.016879 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="eaf20575-fb73-4f03-9b71-e9cf7f76710b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 01 16:07:35 crc kubenswrapper[4949]: E1001 16:07:35.016899 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerName="extract-content" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.016906 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerName="extract-content" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.017084 
4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="62cd5bd0-ca7b-4591-9aa0-f9b78d715c6e" containerName="registry-server" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.017098 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="eaf20575-fb73-4f03-9b71-e9cf7f76710b" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.017678 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.024812 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.024836 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.029823 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.030004 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.039333 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk"] Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.184611 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: 
I1001 16:07:35.184729 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9frrv\" (UniqueName: \"kubernetes.io/projected/df6fb3b0-6b6b-40d2-b610-9a393a89d502-kube-api-access-9frrv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.184762 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.286953 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.287371 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9frrv\" (UniqueName: \"kubernetes.io/projected/df6fb3b0-6b6b-40d2-b610-9a393a89d502-kube-api-access-9frrv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.287555 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.295510 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.295556 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.309974 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9frrv\" (UniqueName: \"kubernetes.io/projected/df6fb3b0-6b6b-40d2-b610-9a393a89d502-kube-api-access-9frrv\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.392917 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.925700 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk"] Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.931213 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 16:07:35 crc kubenswrapper[4949]: I1001 16:07:35.940483 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" event={"ID":"df6fb3b0-6b6b-40d2-b610-9a393a89d502","Type":"ContainerStarted","Data":"20b99c85e27dba2739a6c0e766f58d61f4122aa89ee119d72786f5ae532849f3"} Oct 01 16:07:36 crc kubenswrapper[4949]: I1001 16:07:36.950743 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" event={"ID":"df6fb3b0-6b6b-40d2-b610-9a393a89d502","Type":"ContainerStarted","Data":"763f5fe8db341a31d7ad986a69dd845062f9987afe5eac9513fb28f8e97131b9"} Oct 01 16:07:36 crc kubenswrapper[4949]: I1001 16:07:36.977493 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" podStartSLOduration=2.528659587 podStartE2EDuration="2.977470313s" podCreationTimestamp="2025-10-01 16:07:34 +0000 UTC" firstStartedPulling="2025-10-01 16:07:35.930929245 +0000 UTC m=+1555.236535456" lastFinishedPulling="2025-10-01 16:07:36.379739931 +0000 UTC m=+1555.685346182" observedRunningTime="2025-10-01 16:07:36.973255646 +0000 UTC m=+1556.278861877" watchObservedRunningTime="2025-10-01 16:07:36.977470313 +0000 UTC m=+1556.283076514" Oct 01 16:07:44 crc kubenswrapper[4949]: I1001 16:07:44.470330 4949 scope.go:117] "RemoveContainer" containerID="c10c04897d2fcea21ba258dbd462d7f62a5343064bda9b186b8c5869e7bb2c92" Oct 01 
16:07:44 crc kubenswrapper[4949]: I1001 16:07:44.491215 4949 scope.go:117] "RemoveContainer" containerID="68c9a01d9d3ee3c696025febb9d560b621b39d962eb04d4034bafee6e73cd0f5" Oct 01 16:07:45 crc kubenswrapper[4949]: I1001 16:07:45.602690 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:07:45 crc kubenswrapper[4949]: E1001 16:07:45.603252 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:08:00 crc kubenswrapper[4949]: I1001 16:08:00.602281 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:08:00 crc kubenswrapper[4949]: E1001 16:08:00.603608 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:08:08 crc kubenswrapper[4949]: I1001 16:08:08.058809 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-create-zzmsh"] Oct 01 16:08:08 crc kubenswrapper[4949]: I1001 16:08:08.072614 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-create-zzmsh"] Oct 01 16:08:09 crc kubenswrapper[4949]: I1001 16:08:09.620844 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="1ff8b95e-dfbc-4a49-b845-7a598a5acb7d" path="/var/lib/kubelet/pods/1ff8b95e-dfbc-4a49-b845-7a598a5acb7d/volumes" Oct 01 16:08:14 crc kubenswrapper[4949]: I1001 16:08:14.027162 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-create-jc87k"] Oct 01 16:08:14 crc kubenswrapper[4949]: I1001 16:08:14.036311 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-create-jc87k"] Oct 01 16:08:14 crc kubenswrapper[4949]: I1001 16:08:14.601826 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:08:14 crc kubenswrapper[4949]: E1001 16:08:14.602981 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:08:15 crc kubenswrapper[4949]: I1001 16:08:15.622118 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0f2a5755-0788-460c-bc1e-0a261a9a6e0f" path="/var/lib/kubelet/pods/0f2a5755-0788-460c-bc1e-0a261a9a6e0f/volumes" Oct 01 16:08:18 crc kubenswrapper[4949]: I1001 16:08:18.049789 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-create-xsqct"] Oct 01 16:08:18 crc kubenswrapper[4949]: I1001 16:08:18.065233 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-da20-account-create-hjm96"] Oct 01 16:08:18 crc kubenswrapper[4949]: I1001 16:08:18.074043 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-create-xsqct"] Oct 01 16:08:18 crc kubenswrapper[4949]: I1001 16:08:18.082653 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack/placement-da20-account-create-hjm96"] Oct 01 16:08:19 crc kubenswrapper[4949]: I1001 16:08:19.620317 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1337a138-272f-49d7-b806-2f097cfb71b1" path="/var/lib/kubelet/pods/1337a138-272f-49d7-b806-2f097cfb71b1/volumes" Oct 01 16:08:19 crc kubenswrapper[4949]: I1001 16:08:19.621410 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fe6ee0ce-f7e0-43b6-a591-e75632e2cf00" path="/var/lib/kubelet/pods/fe6ee0ce-f7e0-43b6-a591-e75632e2cf00/volumes" Oct 01 16:08:23 crc kubenswrapper[4949]: I1001 16:08:23.027874 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-6107-account-create-2hp27"] Oct 01 16:08:23 crc kubenswrapper[4949]: I1001 16:08:23.034761 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-6107-account-create-2hp27"] Oct 01 16:08:23 crc kubenswrapper[4949]: I1001 16:08:23.614084 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ed0c84f-caed-42c8-bf16-0091acce0f6b" path="/var/lib/kubelet/pods/6ed0c84f-caed-42c8-bf16-0091acce0f6b/volumes" Oct 01 16:08:28 crc kubenswrapper[4949]: I1001 16:08:28.023686 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-57a7-account-create-lnbg5"] Oct 01 16:08:28 crc kubenswrapper[4949]: I1001 16:08:28.031364 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-57a7-account-create-lnbg5"] Oct 01 16:08:29 crc kubenswrapper[4949]: I1001 16:08:29.602287 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:08:29 crc kubenswrapper[4949]: E1001 16:08:29.602659 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:08:29 crc kubenswrapper[4949]: I1001 16:08:29.612439 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce6e3295-9412-4b03-a51e-2311ec5922aa" path="/var/lib/kubelet/pods/ce6e3295-9412-4b03-a51e-2311ec5922aa/volumes" Oct 01 16:08:42 crc kubenswrapper[4949]: I1001 16:08:42.600983 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:08:42 crc kubenswrapper[4949]: E1001 16:08:42.601848 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:08:44 crc kubenswrapper[4949]: I1001 16:08:44.542542 4949 scope.go:117] "RemoveContainer" containerID="0b05e9e3781ab725a7a1cbcdd714b6c8b99f34418c32227558eca0f90dff8adc" Oct 01 16:08:44 crc kubenswrapper[4949]: I1001 16:08:44.574405 4949 scope.go:117] "RemoveContainer" containerID="6c32a17103cde505f4bf541b6fc983bb674e52d6a85dbed5e9173a599e671813" Oct 01 16:08:44 crc kubenswrapper[4949]: I1001 16:08:44.610472 4949 scope.go:117] "RemoveContainer" containerID="f303af1128e13f4226b028c9c93eeb28cfe40722f1830db8be37439cfa7047db" Oct 01 16:08:44 crc kubenswrapper[4949]: I1001 16:08:44.654060 4949 scope.go:117] "RemoveContainer" containerID="eac26c162ffa85b7dec0072552cd752e3f6c83e90644ca7299e7fb92239e443f" Oct 01 16:08:44 crc kubenswrapper[4949]: I1001 16:08:44.683518 4949 scope.go:117] "RemoveContainer" 
containerID="3c8d984c8dbb96e2cc5f2b1b532cbeb14776817532ffbe6329c3568bdcf08790" Oct 01 16:08:44 crc kubenswrapper[4949]: I1001 16:08:44.720336 4949 scope.go:117] "RemoveContainer" containerID="89ff5a4754b612a7bc103620e753b679367d62a4dbd39f5a9dd5230b82d3713e" Oct 01 16:08:45 crc kubenswrapper[4949]: I1001 16:08:45.051504 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-create-rrf4c"] Oct 01 16:08:45 crc kubenswrapper[4949]: I1001 16:08:45.062537 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-create-rrf4c"] Oct 01 16:08:45 crc kubenswrapper[4949]: I1001 16:08:45.617822 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b92402c1-7aa8-4a18-8603-a88c7d5b3735" path="/var/lib/kubelet/pods/b92402c1-7aa8-4a18-8603-a88c7d5b3735/volumes" Oct 01 16:08:46 crc kubenswrapper[4949]: I1001 16:08:46.052765 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-create-nrvpq"] Oct 01 16:08:46 crc kubenswrapper[4949]: I1001 16:08:46.065025 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-create-dv4j7"] Oct 01 16:08:46 crc kubenswrapper[4949]: I1001 16:08:46.075103 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-create-nrvpq"] Oct 01 16:08:46 crc kubenswrapper[4949]: I1001 16:08:46.085813 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-create-dv4j7"] Oct 01 16:08:47 crc kubenswrapper[4949]: I1001 16:08:47.612614 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c5b1392-74a7-44b0-8475-7795e34531ca" path="/var/lib/kubelet/pods/1c5b1392-74a7-44b0-8475-7795e34531ca/volumes" Oct 01 16:08:47 crc kubenswrapper[4949]: I1001 16:08:47.613371 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f41fa71-2325-4f08-9fe0-1268868683cf" path="/var/lib/kubelet/pods/8f41fa71-2325-4f08-9fe0-1268868683cf/volumes" Oct 01 16:08:47 crc 
kubenswrapper[4949]: I1001 16:08:47.640111 4949 generic.go:334] "Generic (PLEG): container finished" podID="df6fb3b0-6b6b-40d2-b610-9a393a89d502" containerID="763f5fe8db341a31d7ad986a69dd845062f9987afe5eac9513fb28f8e97131b9" exitCode=0 Oct 01 16:08:47 crc kubenswrapper[4949]: I1001 16:08:47.640170 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" event={"ID":"df6fb3b0-6b6b-40d2-b610-9a393a89d502","Type":"ContainerDied","Data":"763f5fe8db341a31d7ad986a69dd845062f9987afe5eac9513fb28f8e97131b9"} Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.023504 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.191645 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-ssh-key\") pod \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.191816 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-inventory\") pod \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.192457 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9frrv\" (UniqueName: \"kubernetes.io/projected/df6fb3b0-6b6b-40d2-b610-9a393a89d502-kube-api-access-9frrv\") pod \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\" (UID: \"df6fb3b0-6b6b-40d2-b610-9a393a89d502\") " Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.197642 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/df6fb3b0-6b6b-40d2-b610-9a393a89d502-kube-api-access-9frrv" (OuterVolumeSpecName: "kube-api-access-9frrv") pod "df6fb3b0-6b6b-40d2-b610-9a393a89d502" (UID: "df6fb3b0-6b6b-40d2-b610-9a393a89d502"). InnerVolumeSpecName "kube-api-access-9frrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.221917 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-inventory" (OuterVolumeSpecName: "inventory") pod "df6fb3b0-6b6b-40d2-b610-9a393a89d502" (UID: "df6fb3b0-6b6b-40d2-b610-9a393a89d502"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.226614 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "df6fb3b0-6b6b-40d2-b610-9a393a89d502" (UID: "df6fb3b0-6b6b-40d2-b610-9a393a89d502"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.295077 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.295154 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9frrv\" (UniqueName: \"kubernetes.io/projected/df6fb3b0-6b6b-40d2-b610-9a393a89d502-kube-api-access-9frrv\") on node \"crc\" DevicePath \"\"" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.295188 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/df6fb3b0-6b6b-40d2-b610-9a393a89d502-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.700296 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" event={"ID":"df6fb3b0-6b6b-40d2-b610-9a393a89d502","Type":"ContainerDied","Data":"20b99c85e27dba2739a6c0e766f58d61f4122aa89ee119d72786f5ae532849f3"} Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.700614 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="20b99c85e27dba2739a6c0e766f58d61f4122aa89ee119d72786f5ae532849f3" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.703416 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.744629 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c"] Oct 01 16:08:49 crc kubenswrapper[4949]: E1001 16:08:49.745081 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df6fb3b0-6b6b-40d2-b610-9a393a89d502" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.745099 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df6fb3b0-6b6b-40d2-b610-9a393a89d502" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.745368 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df6fb3b0-6b6b-40d2-b610-9a393a89d502" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.746082 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.750811 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.751304 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.751377 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.751567 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.759812 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c"] Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.905546 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k84d\" (UniqueName: \"kubernetes.io/projected/fabdc1d2-59b5-4699-b280-e35380873dc2-kube-api-access-9k84d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bct5c\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 16:08:49.905607 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bct5c\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:49 crc kubenswrapper[4949]: I1001 
16:08:49.905628 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bct5c\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:50 crc kubenswrapper[4949]: I1001 16:08:50.007113 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9k84d\" (UniqueName: \"kubernetes.io/projected/fabdc1d2-59b5-4699-b280-e35380873dc2-kube-api-access-9k84d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bct5c\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:50 crc kubenswrapper[4949]: I1001 16:08:50.007201 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bct5c\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:50 crc kubenswrapper[4949]: I1001 16:08:50.007228 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bct5c\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:50 crc kubenswrapper[4949]: I1001 16:08:50.012229 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-ssh-key\") pod 
\"validate-network-edpm-deployment-openstack-edpm-ipam-bct5c\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:50 crc kubenswrapper[4949]: I1001 16:08:50.012821 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bct5c\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:50 crc kubenswrapper[4949]: I1001 16:08:50.026518 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9k84d\" (UniqueName: \"kubernetes.io/projected/fabdc1d2-59b5-4699-b280-e35380873dc2-kube-api-access-9k84d\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-bct5c\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:50 crc kubenswrapper[4949]: I1001 16:08:50.060665 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:50 crc kubenswrapper[4949]: I1001 16:08:50.606740 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c"] Oct 01 16:08:50 crc kubenswrapper[4949]: I1001 16:08:50.709878 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" event={"ID":"fabdc1d2-59b5-4699-b280-e35380873dc2","Type":"ContainerStarted","Data":"e706bddb6482f2806950f05b02dca32eb8e6027a4d0ad7574804662ccae8d465"} Oct 01 16:08:51 crc kubenswrapper[4949]: I1001 16:08:51.031266 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-db-sync-76shp"] Oct 01 16:08:51 crc kubenswrapper[4949]: I1001 16:08:51.041754 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-db-sync-76shp"] Oct 01 16:08:51 crc kubenswrapper[4949]: I1001 16:08:51.626304 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="21738a3b-69a2-4955-b45d-fe1f31585951" path="/var/lib/kubelet/pods/21738a3b-69a2-4955-b45d-fe1f31585951/volumes" Oct 01 16:08:51 crc kubenswrapper[4949]: I1001 16:08:51.722036 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" event={"ID":"fabdc1d2-59b5-4699-b280-e35380873dc2","Type":"ContainerStarted","Data":"ecd50d8a74fd99fafd9394d1ad96e7e74d04fcc11ecb5071005123039991ab7c"} Oct 01 16:08:51 crc kubenswrapper[4949]: I1001 16:08:51.764022 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" podStartSLOduration=2.131621292 podStartE2EDuration="2.763997921s" podCreationTimestamp="2025-10-01 16:08:49 +0000 UTC" firstStartedPulling="2025-10-01 16:08:50.621316059 +0000 UTC m=+1629.926922250" lastFinishedPulling="2025-10-01 
16:08:51.253692648 +0000 UTC m=+1630.559298879" observedRunningTime="2025-10-01 16:08:51.736950399 +0000 UTC m=+1631.042556630" watchObservedRunningTime="2025-10-01 16:08:51.763997921 +0000 UTC m=+1631.069604132" Oct 01 16:08:55 crc kubenswrapper[4949]: I1001 16:08:55.602402 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:08:55 crc kubenswrapper[4949]: E1001 16:08:55.602936 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.061937 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-fd00-account-create-rg9pt"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.071194 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-5d51-account-create-ctjvl"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.081814 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-6041-account-create-67kfb"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.091983 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-fd00-account-create-rg9pt"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.099808 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-6041-account-create-67kfb"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.107490 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-db-sync-t2pgk"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.113879 4949 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openstack/cinder-5d51-account-create-ctjvl"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.119661 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-db-sync-t2pgk"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.574412 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-w6vf6"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.577674 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.592588 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6vf6"] Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.724240 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9bfd\" (UniqueName: \"kubernetes.io/projected/d4228f73-ff18-43f5-b6f9-4b30ddee8845-kube-api-access-s9bfd\") pod \"redhat-operators-w6vf6\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.724353 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-catalog-content\") pod \"redhat-operators-w6vf6\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.725042 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-utilities\") pod \"redhat-operators-w6vf6\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc 
kubenswrapper[4949]: I1001 16:08:56.782534 4949 generic.go:334] "Generic (PLEG): container finished" podID="fabdc1d2-59b5-4699-b280-e35380873dc2" containerID="ecd50d8a74fd99fafd9394d1ad96e7e74d04fcc11ecb5071005123039991ab7c" exitCode=0 Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.782584 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" event={"ID":"fabdc1d2-59b5-4699-b280-e35380873dc2","Type":"ContainerDied","Data":"ecd50d8a74fd99fafd9394d1ad96e7e74d04fcc11ecb5071005123039991ab7c"} Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.826704 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9bfd\" (UniqueName: \"kubernetes.io/projected/d4228f73-ff18-43f5-b6f9-4b30ddee8845-kube-api-access-s9bfd\") pod \"redhat-operators-w6vf6\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.826799 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-catalog-content\") pod \"redhat-operators-w6vf6\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.826869 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-utilities\") pod \"redhat-operators-w6vf6\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.827382 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-utilities\") pod 
\"redhat-operators-w6vf6\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.827439 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-catalog-content\") pod \"redhat-operators-w6vf6\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.847313 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9bfd\" (UniqueName: \"kubernetes.io/projected/d4228f73-ff18-43f5-b6f9-4b30ddee8845-kube-api-access-s9bfd\") pod \"redhat-operators-w6vf6\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:56 crc kubenswrapper[4949]: I1001 16:08:56.900063 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:08:57 crc kubenswrapper[4949]: I1001 16:08:57.357225 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-w6vf6"] Oct 01 16:08:57 crc kubenswrapper[4949]: I1001 16:08:57.612407 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="24061dd1-dc8e-4fb2-b372-25983f927a74" path="/var/lib/kubelet/pods/24061dd1-dc8e-4fb2-b372-25983f927a74/volumes" Oct 01 16:08:57 crc kubenswrapper[4949]: I1001 16:08:57.613197 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4b1a3c6f-40a7-4816-8404-4910abf14478" path="/var/lib/kubelet/pods/4b1a3c6f-40a7-4816-8404-4910abf14478/volumes" Oct 01 16:08:57 crc kubenswrapper[4949]: I1001 16:08:57.613691 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f548f94-239b-4712-bc59-5dfa63311d7f" path="/var/lib/kubelet/pods/9f548f94-239b-4712-bc59-5dfa63311d7f/volumes" Oct 01 16:08:57 crc kubenswrapper[4949]: I1001 16:08:57.614174 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db479bb0-2fc9-4a74-aeaa-8bb8446f2657" path="/var/lib/kubelet/pods/db479bb0-2fc9-4a74-aeaa-8bb8446f2657/volumes" Oct 01 16:08:57 crc kubenswrapper[4949]: I1001 16:08:57.794593 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerID="841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2" exitCode=0 Oct 01 16:08:57 crc kubenswrapper[4949]: I1001 16:08:57.794673 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6vf6" event={"ID":"d4228f73-ff18-43f5-b6f9-4b30ddee8845","Type":"ContainerDied","Data":"841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2"} Oct 01 16:08:57 crc kubenswrapper[4949]: I1001 16:08:57.794727 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6vf6" 
event={"ID":"d4228f73-ff18-43f5-b6f9-4b30ddee8845","Type":"ContainerStarted","Data":"21e1dc50fda3c9f280ffa043859b97f3e9b5535d56d09fd5f43b7df29988d11b"} Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.230265 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.357775 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-inventory\") pod \"fabdc1d2-59b5-4699-b280-e35380873dc2\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.357902 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9k84d\" (UniqueName: \"kubernetes.io/projected/fabdc1d2-59b5-4699-b280-e35380873dc2-kube-api-access-9k84d\") pod \"fabdc1d2-59b5-4699-b280-e35380873dc2\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.357953 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-ssh-key\") pod \"fabdc1d2-59b5-4699-b280-e35380873dc2\" (UID: \"fabdc1d2-59b5-4699-b280-e35380873dc2\") " Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.362671 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fabdc1d2-59b5-4699-b280-e35380873dc2-kube-api-access-9k84d" (OuterVolumeSpecName: "kube-api-access-9k84d") pod "fabdc1d2-59b5-4699-b280-e35380873dc2" (UID: "fabdc1d2-59b5-4699-b280-e35380873dc2"). InnerVolumeSpecName "kube-api-access-9k84d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.384089 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "fabdc1d2-59b5-4699-b280-e35380873dc2" (UID: "fabdc1d2-59b5-4699-b280-e35380873dc2"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.387200 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-inventory" (OuterVolumeSpecName: "inventory") pod "fabdc1d2-59b5-4699-b280-e35380873dc2" (UID: "fabdc1d2-59b5-4699-b280-e35380873dc2"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.460306 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.460341 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9k84d\" (UniqueName: \"kubernetes.io/projected/fabdc1d2-59b5-4699-b280-e35380873dc2-kube-api-access-9k84d\") on node \"crc\" DevicePath \"\"" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.460350 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/fabdc1d2-59b5-4699-b280-e35380873dc2-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.806664 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" 
event={"ID":"fabdc1d2-59b5-4699-b280-e35380873dc2","Type":"ContainerDied","Data":"e706bddb6482f2806950f05b02dca32eb8e6027a4d0ad7574804662ccae8d465"} Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.807002 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e706bddb6482f2806950f05b02dca32eb8e6027a4d0ad7574804662ccae8d465" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.806672 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.810260 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6vf6" event={"ID":"d4228f73-ff18-43f5-b6f9-4b30ddee8845","Type":"ContainerStarted","Data":"b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226"} Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.897977 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw"] Oct 01 16:08:58 crc kubenswrapper[4949]: E1001 16:08:58.898635 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fabdc1d2-59b5-4699-b280-e35380873dc2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.898664 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fabdc1d2-59b5-4699-b280-e35380873dc2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.898990 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fabdc1d2-59b5-4699-b280-e35380873dc2" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.899905 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.902377 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.903266 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.903459 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.903612 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:08:58 crc kubenswrapper[4949]: I1001 16:08:58.910222 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw"] Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.071072 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k85xw\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.071255 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k85xw\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.071806 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rj8xx\" (UniqueName: \"kubernetes.io/projected/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-kube-api-access-rj8xx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k85xw\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.173463 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k85xw\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.173505 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rj8xx\" (UniqueName: \"kubernetes.io/projected/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-kube-api-access-rj8xx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k85xw\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.173630 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k85xw\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.184023 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k85xw\" (UID: 
\"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.186995 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k85xw\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.189450 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rj8xx\" (UniqueName: \"kubernetes.io/projected/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-kube-api-access-rj8xx\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-k85xw\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.215466 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.744233 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw"] Oct 01 16:08:59 crc kubenswrapper[4949]: W1001 16:08:59.748427 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67fe130c_27e6_4d46_8f3e_58bc9a9e94a7.slice/crio-e35408e3e72b240d51c8937c42f865b6f4c92dbbc9602002ef3af3fa07230d14 WatchSource:0}: Error finding container e35408e3e72b240d51c8937c42f865b6f4c92dbbc9602002ef3af3fa07230d14: Status 404 returned error can't find the container with id e35408e3e72b240d51c8937c42f865b6f4c92dbbc9602002ef3af3fa07230d14 Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.824774 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" event={"ID":"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7","Type":"ContainerStarted","Data":"e35408e3e72b240d51c8937c42f865b6f4c92dbbc9602002ef3af3fa07230d14"} Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.826936 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerID="b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226" exitCode=0 Oct 01 16:08:59 crc kubenswrapper[4949]: I1001 16:08:59.826983 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6vf6" event={"ID":"d4228f73-ff18-43f5-b6f9-4b30ddee8845","Type":"ContainerDied","Data":"b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226"} Oct 01 16:09:00 crc kubenswrapper[4949]: I1001 16:09:00.835342 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" 
event={"ID":"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7","Type":"ContainerStarted","Data":"027e40d88a9075c2c3ed621f6735e0c1d1351a2981c7ec512524851a01f994d3"} Oct 01 16:09:00 crc kubenswrapper[4949]: I1001 16:09:00.837910 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6vf6" event={"ID":"d4228f73-ff18-43f5-b6f9-4b30ddee8845","Type":"ContainerStarted","Data":"eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66"} Oct 01 16:09:00 crc kubenswrapper[4949]: I1001 16:09:00.872316 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" podStartSLOduration=2.235016758 podStartE2EDuration="2.872297354s" podCreationTimestamp="2025-10-01 16:08:58 +0000 UTC" firstStartedPulling="2025-10-01 16:08:59.752034395 +0000 UTC m=+1639.057640586" lastFinishedPulling="2025-10-01 16:09:00.389314991 +0000 UTC m=+1639.694921182" observedRunningTime="2025-10-01 16:09:00.85597368 +0000 UTC m=+1640.161579881" watchObservedRunningTime="2025-10-01 16:09:00.872297354 +0000 UTC m=+1640.177903545" Oct 01 16:09:00 crc kubenswrapper[4949]: I1001 16:09:00.887153 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-w6vf6" podStartSLOduration=2.39516611 podStartE2EDuration="4.887111466s" podCreationTimestamp="2025-10-01 16:08:56 +0000 UTC" firstStartedPulling="2025-10-01 16:08:57.796427796 +0000 UTC m=+1637.102034007" lastFinishedPulling="2025-10-01 16:09:00.288373172 +0000 UTC m=+1639.593979363" observedRunningTime="2025-10-01 16:09:00.876933213 +0000 UTC m=+1640.182539404" watchObservedRunningTime="2025-10-01 16:09:00.887111466 +0000 UTC m=+1640.192717647" Oct 01 16:09:06 crc kubenswrapper[4949]: I1001 16:09:06.900722 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:09:06 crc kubenswrapper[4949]: I1001 16:09:06.901615 
4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:09:06 crc kubenswrapper[4949]: I1001 16:09:06.968242 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:09:07 crc kubenswrapper[4949]: I1001 16:09:07.602493 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:09:07 crc kubenswrapper[4949]: E1001 16:09:07.602953 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:09:07 crc kubenswrapper[4949]: I1001 16:09:07.968414 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:09:08 crc kubenswrapper[4949]: I1001 16:09:08.029495 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w6vf6"] Oct 01 16:09:09 crc kubenswrapper[4949]: I1001 16:09:09.931386 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-w6vf6" podUID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerName="registry-server" containerID="cri-o://eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66" gracePeriod=2 Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.420259 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.586106 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9bfd\" (UniqueName: \"kubernetes.io/projected/d4228f73-ff18-43f5-b6f9-4b30ddee8845-kube-api-access-s9bfd\") pod \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.586273 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-catalog-content\") pod \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.586606 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-utilities\") pod \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\" (UID: \"d4228f73-ff18-43f5-b6f9-4b30ddee8845\") " Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.587550 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-utilities" (OuterVolumeSpecName: "utilities") pod "d4228f73-ff18-43f5-b6f9-4b30ddee8845" (UID: "d4228f73-ff18-43f5-b6f9-4b30ddee8845"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.592034 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4228f73-ff18-43f5-b6f9-4b30ddee8845-kube-api-access-s9bfd" (OuterVolumeSpecName: "kube-api-access-s9bfd") pod "d4228f73-ff18-43f5-b6f9-4b30ddee8845" (UID: "d4228f73-ff18-43f5-b6f9-4b30ddee8845"). InnerVolumeSpecName "kube-api-access-s9bfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.669601 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4228f73-ff18-43f5-b6f9-4b30ddee8845" (UID: "d4228f73-ff18-43f5-b6f9-4b30ddee8845"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.688957 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.688991 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9bfd\" (UniqueName: \"kubernetes.io/projected/d4228f73-ff18-43f5-b6f9-4b30ddee8845-kube-api-access-s9bfd\") on node \"crc\" DevicePath \"\"" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.689000 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4228f73-ff18-43f5-b6f9-4b30ddee8845-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.941554 4949 generic.go:334] "Generic (PLEG): container finished" podID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerID="eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66" exitCode=0 Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.941595 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-w6vf6" event={"ID":"d4228f73-ff18-43f5-b6f9-4b30ddee8845","Type":"ContainerDied","Data":"eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66"} Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.941901 4949 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-operators-w6vf6" event={"ID":"d4228f73-ff18-43f5-b6f9-4b30ddee8845","Type":"ContainerDied","Data":"21e1dc50fda3c9f280ffa043859b97f3e9b5535d56d09fd5f43b7df29988d11b"} Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.941922 4949 scope.go:117] "RemoveContainer" containerID="eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.941621 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-w6vf6" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.983213 4949 scope.go:117] "RemoveContainer" containerID="b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226" Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.985094 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-w6vf6"] Oct 01 16:09:10 crc kubenswrapper[4949]: I1001 16:09:10.994893 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-w6vf6"] Oct 01 16:09:11 crc kubenswrapper[4949]: I1001 16:09:11.007415 4949 scope.go:117] "RemoveContainer" containerID="841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2" Oct 01 16:09:11 crc kubenswrapper[4949]: I1001 16:09:11.049946 4949 scope.go:117] "RemoveContainer" containerID="eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66" Oct 01 16:09:11 crc kubenswrapper[4949]: E1001 16:09:11.050756 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66\": container with ID starting with eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66 not found: ID does not exist" containerID="eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66" Oct 01 16:09:11 crc kubenswrapper[4949]: I1001 16:09:11.050782 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66"} err="failed to get container status \"eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66\": rpc error: code = NotFound desc = could not find container \"eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66\": container with ID starting with eae3d81d08c053fef139c5d2300ad3259d34ebaecbae54979299d82ec9d8ef66 not found: ID does not exist" Oct 01 16:09:11 crc kubenswrapper[4949]: I1001 16:09:11.050806 4949 scope.go:117] "RemoveContainer" containerID="b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226" Oct 01 16:09:11 crc kubenswrapper[4949]: E1001 16:09:11.051735 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226\": container with ID starting with b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226 not found: ID does not exist" containerID="b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226" Oct 01 16:09:11 crc kubenswrapper[4949]: I1001 16:09:11.051762 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226"} err="failed to get container status \"b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226\": rpc error: code = NotFound desc = could not find container \"b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226\": container with ID starting with b90e52708eea4358c137fe0cd51d6373c6551ebaedb8cdb8ba1d045894ded226 not found: ID does not exist" Oct 01 16:09:11 crc kubenswrapper[4949]: I1001 16:09:11.051779 4949 scope.go:117] "RemoveContainer" containerID="841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2" Oct 01 16:09:11 crc kubenswrapper[4949]: E1001 
16:09:11.056618 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2\": container with ID starting with 841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2 not found: ID does not exist" containerID="841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2" Oct 01 16:09:11 crc kubenswrapper[4949]: I1001 16:09:11.056664 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2"} err="failed to get container status \"841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2\": rpc error: code = NotFound desc = could not find container \"841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2\": container with ID starting with 841ae0db983577189592648ce8dc0763af9c6da2702f6d8ce62573bab6d1a1d2 not found: ID does not exist" Oct 01 16:09:11 crc kubenswrapper[4949]: I1001 16:09:11.616969 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" path="/var/lib/kubelet/pods/d4228f73-ff18-43f5-b6f9-4b30ddee8845/volumes" Oct 01 16:09:22 crc kubenswrapper[4949]: I1001 16:09:22.601993 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:09:22 crc kubenswrapper[4949]: E1001 16:09:22.602857 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:09:33 crc kubenswrapper[4949]: I1001 16:09:33.602758 
4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:09:33 crc kubenswrapper[4949]: E1001 16:09:33.603717 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:09:35 crc kubenswrapper[4949]: I1001 16:09:35.043675 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/keystone-bootstrap-26kd8"] Oct 01 16:09:35 crc kubenswrapper[4949]: I1001 16:09:35.052259 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/keystone-bootstrap-26kd8"] Oct 01 16:09:35 crc kubenswrapper[4949]: I1001 16:09:35.617063 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4940aa0-8f70-4b3e-b9b4-b1e299993441" path="/var/lib/kubelet/pods/f4940aa0-8f70-4b3e-b9b4-b1e299993441/volumes" Oct 01 16:09:38 crc kubenswrapper[4949]: I1001 16:09:38.229005 4949 generic.go:334] "Generic (PLEG): container finished" podID="67fe130c-27e6-4d46-8f3e-58bc9a9e94a7" containerID="027e40d88a9075c2c3ed621f6735e0c1d1351a2981c7ec512524851a01f994d3" exitCode=0 Oct 01 16:09:38 crc kubenswrapper[4949]: I1001 16:09:38.229099 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" event={"ID":"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7","Type":"ContainerDied","Data":"027e40d88a9075c2c3ed621f6735e0c1d1351a2981c7ec512524851a01f994d3"} Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.699499 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.853978 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj8xx\" (UniqueName: \"kubernetes.io/projected/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-kube-api-access-rj8xx\") pod \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.854143 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-ssh-key\") pod \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.854227 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-inventory\") pod \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\" (UID: \"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7\") " Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.860964 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-kube-api-access-rj8xx" (OuterVolumeSpecName: "kube-api-access-rj8xx") pod "67fe130c-27e6-4d46-8f3e-58bc9a9e94a7" (UID: "67fe130c-27e6-4d46-8f3e-58bc9a9e94a7"). InnerVolumeSpecName "kube-api-access-rj8xx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.880509 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "67fe130c-27e6-4d46-8f3e-58bc9a9e94a7" (UID: "67fe130c-27e6-4d46-8f3e-58bc9a9e94a7"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.903691 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-inventory" (OuterVolumeSpecName: "inventory") pod "67fe130c-27e6-4d46-8f3e-58bc9a9e94a7" (UID: "67fe130c-27e6-4d46-8f3e-58bc9a9e94a7"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.956008 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rj8xx\" (UniqueName: \"kubernetes.io/projected/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-kube-api-access-rj8xx\") on node \"crc\" DevicePath \"\"" Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.956037 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:09:39 crc kubenswrapper[4949]: I1001 16:09:39.956046 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.251037 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" event={"ID":"67fe130c-27e6-4d46-8f3e-58bc9a9e94a7","Type":"ContainerDied","Data":"e35408e3e72b240d51c8937c42f865b6f4c92dbbc9602002ef3af3fa07230d14"} Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.251073 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e35408e3e72b240d51c8937c42f865b6f4c92dbbc9602002ef3af3fa07230d14" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.251151 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.360856 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh"] Oct 01 16:09:40 crc kubenswrapper[4949]: E1001 16:09:40.362520 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerName="registry-server" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.362544 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerName="registry-server" Oct 01 16:09:40 crc kubenswrapper[4949]: E1001 16:09:40.362562 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67fe130c-27e6-4d46-8f3e-58bc9a9e94a7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.362570 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="67fe130c-27e6-4d46-8f3e-58bc9a9e94a7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:09:40 crc kubenswrapper[4949]: E1001 16:09:40.362583 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerName="extract-content" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.362589 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerName="extract-content" Oct 01 16:09:40 crc kubenswrapper[4949]: E1001 16:09:40.362602 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerName="extract-utilities" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.362608 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerName="extract-utilities" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.362780 
4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="67fe130c-27e6-4d46-8f3e-58bc9a9e94a7" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.362793 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4228f73-ff18-43f5-b6f9-4b30ddee8845" containerName="registry-server" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.363439 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.365862 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.365895 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.366056 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.366737 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.373347 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh"] Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.464301 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgdzj\" (UniqueName: \"kubernetes.io/projected/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-kube-api-access-zgdzj\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc 
kubenswrapper[4949]: I1001 16:09:40.464377 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.464744 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.566945 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.567167 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zgdzj\" (UniqueName: \"kubernetes.io/projected/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-kube-api-access-zgdzj\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.567220 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-inventory\") pod 
\"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.579629 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.581037 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.598515 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgdzj\" (UniqueName: \"kubernetes.io/projected/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-kube-api-access-zgdzj\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:40 crc kubenswrapper[4949]: I1001 16:09:40.679092 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:41 crc kubenswrapper[4949]: I1001 16:09:41.028712 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/placement-db-sync-wfg74"] Oct 01 16:09:41 crc kubenswrapper[4949]: I1001 16:09:41.039707 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/placement-db-sync-wfg74"] Oct 01 16:09:41 crc kubenswrapper[4949]: I1001 16:09:41.229767 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh"] Oct 01 16:09:41 crc kubenswrapper[4949]: I1001 16:09:41.264420 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" event={"ID":"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a","Type":"ContainerStarted","Data":"714a678d3f5713450d1b39fffd9545a2489f9854acb390324d7bcb7d4d1d49a6"} Oct 01 16:09:41 crc kubenswrapper[4949]: I1001 16:09:41.632995 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81bf5b66-a190-4c1a-8607-863f93075c01" path="/var/lib/kubelet/pods/81bf5b66-a190-4c1a-8607-863f93075c01/volumes" Oct 01 16:09:41 crc kubenswrapper[4949]: I1001 16:09:41.738369 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:09:42 crc kubenswrapper[4949]: I1001 16:09:42.276093 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" event={"ID":"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a","Type":"ContainerStarted","Data":"329e02b1d9b404e86bc9052660e2b57e15025154947f78dcaface24a63feea3c"} Oct 01 16:09:42 crc kubenswrapper[4949]: I1001 16:09:42.297096 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" podStartSLOduration=1.801285686 podStartE2EDuration="2.297071744s" 
podCreationTimestamp="2025-10-01 16:09:40 +0000 UTC" firstStartedPulling="2025-10-01 16:09:41.239638915 +0000 UTC m=+1680.545245106" lastFinishedPulling="2025-10-01 16:09:41.735424933 +0000 UTC m=+1681.041031164" observedRunningTime="2025-10-01 16:09:42.295105209 +0000 UTC m=+1681.600711440" watchObservedRunningTime="2025-10-01 16:09:42.297071744 +0000 UTC m=+1681.602677955" Oct 01 16:09:43 crc kubenswrapper[4949]: I1001 16:09:43.033499 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/barbican-db-sync-bm8vc"] Oct 01 16:09:43 crc kubenswrapper[4949]: I1001 16:09:43.046054 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/barbican-db-sync-bm8vc"] Oct 01 16:09:43 crc kubenswrapper[4949]: I1001 16:09:43.619828 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e44454-db3f-453a-8bc9-d8f435685e32" path="/var/lib/kubelet/pods/d7e44454-db3f-453a-8bc9-d8f435685e32/volumes" Oct 01 16:09:44 crc kubenswrapper[4949]: I1001 16:09:44.029350 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/neutron-db-sync-vs9qk"] Oct 01 16:09:44 crc kubenswrapper[4949]: I1001 16:09:44.044425 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/neutron-db-sync-vs9qk"] Oct 01 16:09:44 crc kubenswrapper[4949]: I1001 16:09:44.601640 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:09:44 crc kubenswrapper[4949]: E1001 16:09:44.602433 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:09:44 crc kubenswrapper[4949]: I1001 16:09:44.851019 4949 
scope.go:117] "RemoveContainer" containerID="879c9bb8ce7b67391592a9dd882ef7bb6b59f58559de77033d3eb297815fbf35" Oct 01 16:09:44 crc kubenswrapper[4949]: I1001 16:09:44.920761 4949 scope.go:117] "RemoveContainer" containerID="19c91af858546721d0015fe3ff605169bd9bf4c54c48ff9ba386466a8a79de49" Oct 01 16:09:44 crc kubenswrapper[4949]: I1001 16:09:44.956283 4949 scope.go:117] "RemoveContainer" containerID="6f4d8443e7f3ab71dd2b9192b52536952c0886c7553f72b34ce0bf26abfa4d61" Oct 01 16:09:44 crc kubenswrapper[4949]: I1001 16:09:44.994877 4949 scope.go:117] "RemoveContainer" containerID="0348f03511f01ce66f92b3f4a3b0cf7ef0a302c351d7500f9a8a6ce9e9cc82ce" Oct 01 16:09:45 crc kubenswrapper[4949]: I1001 16:09:45.032573 4949 scope.go:117] "RemoveContainer" containerID="939f8ab3f1f8d4d87c43c17b33dbea9d54926d4dbd45b55434a8a27502ae2a03" Oct 01 16:09:45 crc kubenswrapper[4949]: I1001 16:09:45.066245 4949 scope.go:117] "RemoveContainer" containerID="02af25e7090bb3712bc983bfe089f4a2119797a976d6b6ca073a53bee130f13f" Oct 01 16:09:45 crc kubenswrapper[4949]: I1001 16:09:45.104009 4949 scope.go:117] "RemoveContainer" containerID="956aa20118237b3229f3f50e0fe2cb6f61d8253275f1e7b716e45a8fbbffa576" Oct 01 16:09:45 crc kubenswrapper[4949]: I1001 16:09:45.127494 4949 scope.go:117] "RemoveContainer" containerID="e4e742ff462b7aceeeee88ae2f34bb8fc52715249faea4416550d16da5a1c878" Oct 01 16:09:45 crc kubenswrapper[4949]: I1001 16:09:45.151610 4949 scope.go:117] "RemoveContainer" containerID="bc890e9d93a4a22895864dbd37936bcc5dbae355582e1b48fc18bc3b26aa3243" Oct 01 16:09:45 crc kubenswrapper[4949]: I1001 16:09:45.182028 4949 scope.go:117] "RemoveContainer" containerID="c57d17611be8f2befa48c27321584d20f70833f57894933b63895fc3722964fa" Oct 01 16:09:45 crc kubenswrapper[4949]: I1001 16:09:45.223897 4949 scope.go:117] "RemoveContainer" containerID="057533705fc546a66181093688ed7cf1ed1817a45bde40d0b2b8ff79fd047084" Oct 01 16:09:45 crc kubenswrapper[4949]: I1001 16:09:45.620522 4949 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="92c26ffd-a7f6-4593-8718-8947375730ef" path="/var/lib/kubelet/pods/92c26ffd-a7f6-4593-8718-8947375730ef/volumes" Oct 01 16:09:46 crc kubenswrapper[4949]: I1001 16:09:46.341022 4949 generic.go:334] "Generic (PLEG): container finished" podID="f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a" containerID="329e02b1d9b404e86bc9052660e2b57e15025154947f78dcaface24a63feea3c" exitCode=0 Oct 01 16:09:46 crc kubenswrapper[4949]: I1001 16:09:46.341157 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" event={"ID":"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a","Type":"ContainerDied","Data":"329e02b1d9b404e86bc9052660e2b57e15025154947f78dcaface24a63feea3c"} Oct 01 16:09:47 crc kubenswrapper[4949]: I1001 16:09:47.840621 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:47 crc kubenswrapper[4949]: I1001 16:09:47.941486 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-ssh-key\") pod \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " Oct 01 16:09:47 crc kubenswrapper[4949]: I1001 16:09:47.942009 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdzj\" (UniqueName: \"kubernetes.io/projected/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-kube-api-access-zgdzj\") pod \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\" (UID: \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " Oct 01 16:09:47 crc kubenswrapper[4949]: I1001 16:09:47.942060 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-inventory\") pod \"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\" (UID: 
\"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a\") " Oct 01 16:09:47 crc kubenswrapper[4949]: I1001 16:09:47.949300 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-kube-api-access-zgdzj" (OuterVolumeSpecName: "kube-api-access-zgdzj") pod "f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a" (UID: "f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a"). InnerVolumeSpecName "kube-api-access-zgdzj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:09:47 crc kubenswrapper[4949]: I1001 16:09:47.977009 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a" (UID: "f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:09:47 crc kubenswrapper[4949]: I1001 16:09:47.987413 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-inventory" (OuterVolumeSpecName: "inventory") pod "f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a" (UID: "f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.045038 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdzj\" (UniqueName: \"kubernetes.io/projected/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-kube-api-access-zgdzj\") on node \"crc\" DevicePath \"\"" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.045097 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.045115 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.365579 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" event={"ID":"f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a","Type":"ContainerDied","Data":"714a678d3f5713450d1b39fffd9545a2489f9854acb390324d7bcb7d4d1d49a6"} Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.365650 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="714a678d3f5713450d1b39fffd9545a2489f9854acb390324d7bcb7d4d1d49a6" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.365684 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.447819 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7"] Oct 01 16:09:48 crc kubenswrapper[4949]: E1001 16:09:48.448368 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.448399 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.448722 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.449761 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.452418 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.453774 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.453860 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.456000 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.466933 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7"] Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.552539 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-twgwn\" (UniqueName: \"kubernetes.io/projected/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-kube-api-access-twgwn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k29r7\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.552576 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k29r7\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.552608 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k29r7\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.654395 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-twgwn\" (UniqueName: \"kubernetes.io/projected/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-kube-api-access-twgwn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k29r7\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.654711 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k29r7\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.654830 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k29r7\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.659645 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k29r7\" (UID: 
\"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.660567 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k29r7\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.674206 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-twgwn\" (UniqueName: \"kubernetes.io/projected/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-kube-api-access-twgwn\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-k29r7\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:48 crc kubenswrapper[4949]: I1001 16:09:48.769158 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:09:49 crc kubenswrapper[4949]: I1001 16:09:49.320117 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7"] Oct 01 16:09:50 crc kubenswrapper[4949]: I1001 16:09:50.455386 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" event={"ID":"ac5b0ffa-c24f-42ec-8c6e-91be329a8402","Type":"ContainerStarted","Data":"f09c9612f8bbe30eda621d256848fcdd883819b915b756cf7e40f8022ce43bc0"} Oct 01 16:09:50 crc kubenswrapper[4949]: I1001 16:09:50.455874 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" event={"ID":"ac5b0ffa-c24f-42ec-8c6e-91be329a8402","Type":"ContainerStarted","Data":"75e729ee8f8da1612a37c9e08db9701b48ef4eb95cbb80916a69b483489145bb"} Oct 01 16:09:50 crc kubenswrapper[4949]: I1001 16:09:50.478606 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" podStartSLOduration=1.941438615 podStartE2EDuration="2.478590465s" podCreationTimestamp="2025-10-01 16:09:48 +0000 UTC" firstStartedPulling="2025-10-01 16:09:49.467922596 +0000 UTC m=+1688.773528837" lastFinishedPulling="2025-10-01 16:09:50.005074496 +0000 UTC m=+1689.310680687" observedRunningTime="2025-10-01 16:09:50.472179986 +0000 UTC m=+1689.777786207" watchObservedRunningTime="2025-10-01 16:09:50.478590465 +0000 UTC m=+1689.784196656" Oct 01 16:09:51 crc kubenswrapper[4949]: I1001 16:09:51.033113 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/cinder-db-sync-tjfmw"] Oct 01 16:09:51 crc kubenswrapper[4949]: I1001 16:09:51.042365 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/cinder-db-sync-tjfmw"] Oct 01 16:09:51 crc kubenswrapper[4949]: I1001 16:09:51.611931 4949 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="41cbbbe8-6b79-4667-ba8d-7252d0d1a998" path="/var/lib/kubelet/pods/41cbbbe8-6b79-4667-ba8d-7252d0d1a998/volumes" Oct 01 16:09:57 crc kubenswrapper[4949]: I1001 16:09:57.602273 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:09:57 crc kubenswrapper[4949]: E1001 16:09:57.603786 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:10:12 crc kubenswrapper[4949]: I1001 16:10:12.602018 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:10:12 crc kubenswrapper[4949]: E1001 16:10:12.602652 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:10:24 crc kubenswrapper[4949]: I1001 16:10:24.049414 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-db-create-js2pl"] Oct 01 16:10:24 crc kubenswrapper[4949]: I1001 16:10:24.057468 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-db-create-bpchz"] Oct 01 16:10:24 crc kubenswrapper[4949]: I1001 16:10:24.065031 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/nova-cell1-db-create-pbcdp"] Oct 01 16:10:24 crc kubenswrapper[4949]: I1001 16:10:24.071796 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-db-create-js2pl"] Oct 01 16:10:24 crc kubenswrapper[4949]: I1001 16:10:24.077705 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-db-create-bpchz"] Oct 01 16:10:24 crc kubenswrapper[4949]: I1001 16:10:24.083406 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-db-create-pbcdp"] Oct 01 16:10:24 crc kubenswrapper[4949]: I1001 16:10:24.602640 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:10:24 crc kubenswrapper[4949]: E1001 16:10:24.603041 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:10:25 crc kubenswrapper[4949]: I1001 16:10:25.616933 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="838c9578-08c9-4330-bae8-1ee82a2acc71" path="/var/lib/kubelet/pods/838c9578-08c9-4330-bae8-1ee82a2acc71/volumes" Oct 01 16:10:25 crc kubenswrapper[4949]: I1001 16:10:25.617533 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9251efff-db93-42b2-a1ba-62cfdee08c7f" path="/var/lib/kubelet/pods/9251efff-db93-42b2-a1ba-62cfdee08c7f/volumes" Oct 01 16:10:25 crc kubenswrapper[4949]: I1001 16:10:25.618007 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94c5fb52-ae85-4c88-a811-fde1ae61a33a" path="/var/lib/kubelet/pods/94c5fb52-ae85-4c88-a811-fde1ae61a33a/volumes" Oct 01 16:10:32 crc kubenswrapper[4949]: I1001 
16:10:32.031762 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-api-3f61-account-create-c4f44"] Oct 01 16:10:32 crc kubenswrapper[4949]: I1001 16:10:32.044017 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-api-3f61-account-create-c4f44"] Oct 01 16:10:33 crc kubenswrapper[4949]: I1001 16:10:33.613664 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="12d472aa-0dd1-4b72-a1e6-384fb866b92f" path="/var/lib/kubelet/pods/12d472aa-0dd1-4b72-a1e6-384fb866b92f/volumes" Oct 01 16:10:35 crc kubenswrapper[4949]: I1001 16:10:35.602050 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:10:35 crc kubenswrapper[4949]: E1001 16:10:35.602504 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:10:41 crc kubenswrapper[4949]: I1001 16:10:41.045420 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-67ab-account-create-8wlvb"] Oct 01 16:10:41 crc kubenswrapper[4949]: I1001 16:10:41.054824 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-8367-account-create-vptlt"] Oct 01 16:10:41 crc kubenswrapper[4949]: I1001 16:10:41.065219 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-67ab-account-create-8wlvb"] Oct 01 16:10:41 crc kubenswrapper[4949]: I1001 16:10:41.071735 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-8367-account-create-vptlt"] Oct 01 16:10:41 crc kubenswrapper[4949]: I1001 16:10:41.611038 4949 kubelet_volumes.go:163] 
"Cleaned up orphaned pod volumes dir" podUID="084e617b-6327-45db-8d6c-61f5d0f779c2" path="/var/lib/kubelet/pods/084e617b-6327-45db-8d6c-61f5d0f779c2/volumes" Oct 01 16:10:41 crc kubenswrapper[4949]: I1001 16:10:41.611752 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b3553a0-42ab-4edb-9a35-4268e81df5ce" path="/var/lib/kubelet/pods/0b3553a0-42ab-4edb-9a35-4268e81df5ce/volumes" Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.009101 4949 generic.go:334] "Generic (PLEG): container finished" podID="ac5b0ffa-c24f-42ec-8c6e-91be329a8402" containerID="f09c9612f8bbe30eda621d256848fcdd883819b915b756cf7e40f8022ce43bc0" exitCode=2 Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.009269 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" event={"ID":"ac5b0ffa-c24f-42ec-8c6e-91be329a8402","Type":"ContainerDied","Data":"f09c9612f8bbe30eda621d256848fcdd883819b915b756cf7e40f8022ce43bc0"} Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.482852 4949 scope.go:117] "RemoveContainer" containerID="9232a624cce7eb367e7276f825c669f16894df640253fbd499fc15094cd8b29b" Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.515602 4949 scope.go:117] "RemoveContainer" containerID="afb5c0e994ef3c39c3836b90b514109c09a7dc733f280b31f6e71e8671ad2d30" Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.571769 4949 scope.go:117] "RemoveContainer" containerID="3d90b8786bacf03f48526bc45504ac7d65b7c8bb873ddfe990c613c2ad769367" Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.617929 4949 scope.go:117] "RemoveContainer" containerID="d0ee2e913b1a3940c452da8b7cf4f4e528043c258432cf2f90a83594e9e39a7d" Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.666795 4949 scope.go:117] "RemoveContainer" containerID="d062929600b36141648398c096f2ff8c2539dee38d9804190b4ef4444bf280de" Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.748754 4949 scope.go:117] "RemoveContainer" 
containerID="35f2947ad6a5c0243df86288559da71faff80418f8959b879d0ddb82b1689af6" Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.769258 4949 scope.go:117] "RemoveContainer" containerID="07d91c7413a4e7927432e7b97110cdc226eef7f33770a710271e1d9f96cec01b" Oct 01 16:10:45 crc kubenswrapper[4949]: I1001 16:10:45.801484 4949 scope.go:117] "RemoveContainer" containerID="95aa3f67524ee4c2cad309bf3cef511fcecd1ff565ecb211657f310b04931304" Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.323360 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.351401 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-inventory\") pod \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.351524 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-ssh-key\") pod \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.351563 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twgwn\" (UniqueName: \"kubernetes.io/projected/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-kube-api-access-twgwn\") pod \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\" (UID: \"ac5b0ffa-c24f-42ec-8c6e-91be329a8402\") " Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.357069 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-kube-api-access-twgwn" (OuterVolumeSpecName: "kube-api-access-twgwn") pod 
"ac5b0ffa-c24f-42ec-8c6e-91be329a8402" (UID: "ac5b0ffa-c24f-42ec-8c6e-91be329a8402"). InnerVolumeSpecName "kube-api-access-twgwn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.376421 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ac5b0ffa-c24f-42ec-8c6e-91be329a8402" (UID: "ac5b0ffa-c24f-42ec-8c6e-91be329a8402"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.384322 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-inventory" (OuterVolumeSpecName: "inventory") pod "ac5b0ffa-c24f-42ec-8c6e-91be329a8402" (UID: "ac5b0ffa-c24f-42ec-8c6e-91be329a8402"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.452910 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.452935 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:10:46 crc kubenswrapper[4949]: I1001 16:10:46.452944 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-twgwn\" (UniqueName: \"kubernetes.io/projected/ac5b0ffa-c24f-42ec-8c6e-91be329a8402-kube-api-access-twgwn\") on node \"crc\" DevicePath \"\"" Oct 01 16:10:47 crc kubenswrapper[4949]: I1001 16:10:47.054762 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" 
event={"ID":"ac5b0ffa-c24f-42ec-8c6e-91be329a8402","Type":"ContainerDied","Data":"75e729ee8f8da1612a37c9e08db9701b48ef4eb95cbb80916a69b483489145bb"} Oct 01 16:10:47 crc kubenswrapper[4949]: I1001 16:10:47.054838 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7" Oct 01 16:10:47 crc kubenswrapper[4949]: I1001 16:10:47.054836 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e729ee8f8da1612a37c9e08db9701b48ef4eb95cbb80916a69b483489145bb" Oct 01 16:10:47 crc kubenswrapper[4949]: I1001 16:10:47.601956 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:10:47 crc kubenswrapper[4949]: E1001 16:10:47.602836 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.026787 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h"] Oct 01 16:10:54 crc kubenswrapper[4949]: E1001 16:10:54.027688 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac5b0ffa-c24f-42ec-8c6e-91be329a8402" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.027703 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac5b0ffa-c24f-42ec-8c6e-91be329a8402" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.027857 4949 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="ac5b0ffa-c24f-42ec-8c6e-91be329a8402" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.028452 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.031940 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.032147 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.032291 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.032481 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.039044 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h"] Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.204446 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-klz9h\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.204527 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4q24\" (UniqueName: \"kubernetes.io/projected/898446c8-028e-442d-a7db-5d2218888fe8-kube-api-access-n4q24\") 
pod \"configure-os-edpm-deployment-openstack-edpm-ipam-klz9h\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.205019 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-klz9h\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.314996 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-klz9h\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.315107 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-klz9h\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.315163 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n4q24\" (UniqueName: \"kubernetes.io/projected/898446c8-028e-442d-a7db-5d2218888fe8-kube-api-access-n4q24\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-klz9h\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 
16:10:54.324679 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-klz9h\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.326669 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-klz9h\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.344181 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n4q24\" (UniqueName: \"kubernetes.io/projected/898446c8-028e-442d-a7db-5d2218888fe8-kube-api-access-n4q24\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-klz9h\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.353390 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:10:54 crc kubenswrapper[4949]: I1001 16:10:54.685924 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h"] Oct 01 16:10:55 crc kubenswrapper[4949]: I1001 16:10:55.121474 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" event={"ID":"898446c8-028e-442d-a7db-5d2218888fe8","Type":"ContainerStarted","Data":"c800c7ba8df0dd7bddd5a09d88063166379a69f88d036291620eab78fa74e1fa"} Oct 01 16:10:56 crc kubenswrapper[4949]: I1001 16:10:56.131803 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" event={"ID":"898446c8-028e-442d-a7db-5d2218888fe8","Type":"ContainerStarted","Data":"830a2f27e1a0916193ed76c18b05b1468aaace94f97a85efa0ea7e39f051457f"} Oct 01 16:10:56 crc kubenswrapper[4949]: I1001 16:10:56.152534 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" podStartSLOduration=1.624423189 podStartE2EDuration="2.152508786s" podCreationTimestamp="2025-10-01 16:10:54 +0000 UTC" firstStartedPulling="2025-10-01 16:10:54.700395242 +0000 UTC m=+1754.006001443" lastFinishedPulling="2025-10-01 16:10:55.228480819 +0000 UTC m=+1754.534087040" observedRunningTime="2025-10-01 16:10:56.146597342 +0000 UTC m=+1755.452203543" watchObservedRunningTime="2025-10-01 16:10:56.152508786 +0000 UTC m=+1755.458114977" Oct 01 16:11:02 crc kubenswrapper[4949]: I1001 16:11:02.601762 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:11:02 crc kubenswrapper[4949]: E1001 16:11:02.602576 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:11:16 crc kubenswrapper[4949]: I1001 16:11:16.601724 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:11:16 crc kubenswrapper[4949]: E1001 16:11:16.602586 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:11:20 crc kubenswrapper[4949]: I1001 16:11:20.043705 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zpzx9"] Oct 01 16:11:20 crc kubenswrapper[4949]: I1001 16:11:20.054425 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-conductor-db-sync-zpzx9"] Oct 01 16:11:21 crc kubenswrapper[4949]: I1001 16:11:21.617060 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cb75992b-3dfd-40ee-a759-9ef7b3372366" path="/var/lib/kubelet/pods/cb75992b-3dfd-40ee-a759-9ef7b3372366/volumes" Oct 01 16:11:28 crc kubenswrapper[4949]: I1001 16:11:28.604176 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:11:28 crc kubenswrapper[4949]: E1001 16:11:28.605242 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:11:37 crc kubenswrapper[4949]: I1001 16:11:37.050081 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell0-cell-mapping-829qg"] Oct 01 16:11:37 crc kubenswrapper[4949]: I1001 16:11:37.062601 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell0-cell-mapping-829qg"] Oct 01 16:11:37 crc kubenswrapper[4949]: I1001 16:11:37.617495 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0a0d0f6b-78d8-4295-8842-0b95d1081339" path="/var/lib/kubelet/pods/0a0d0f6b-78d8-4295-8842-0b95d1081339/volumes" Oct 01 16:11:38 crc kubenswrapper[4949]: I1001 16:11:38.036331 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qt9wv"] Oct 01 16:11:38 crc kubenswrapper[4949]: I1001 16:11:38.056380 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-conductor-db-sync-qt9wv"] Oct 01 16:11:39 crc kubenswrapper[4949]: I1001 16:11:39.622601 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6a3d350-3e49-470b-80e2-0fe197b477e8" path="/var/lib/kubelet/pods/a6a3d350-3e49-470b-80e2-0fe197b477e8/volumes" Oct 01 16:11:40 crc kubenswrapper[4949]: I1001 16:11:40.602084 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:11:40 crc kubenswrapper[4949]: E1001 16:11:40.602458 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:11:44 crc kubenswrapper[4949]: I1001 16:11:44.672636 4949 generic.go:334] "Generic (PLEG): container finished" podID="898446c8-028e-442d-a7db-5d2218888fe8" containerID="830a2f27e1a0916193ed76c18b05b1468aaace94f97a85efa0ea7e39f051457f" exitCode=0 Oct 01 16:11:44 crc kubenswrapper[4949]: I1001 16:11:44.672663 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" event={"ID":"898446c8-028e-442d-a7db-5d2218888fe8","Type":"ContainerDied","Data":"830a2f27e1a0916193ed76c18b05b1468aaace94f97a85efa0ea7e39f051457f"} Oct 01 16:11:45 crc kubenswrapper[4949]: I1001 16:11:45.939452 4949 scope.go:117] "RemoveContainer" containerID="1ec3230cb7f5e4c4bce0c2fffc9fc916ee7ba4ddce52fc33da18ad1adddcd8df" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:45.999330 4949 scope.go:117] "RemoveContainer" containerID="e36a5d7778e966b1cc2376c784f247789fdb36fe0d18a75d659b29bd9b9ee4a7" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.051164 4949 scope.go:117] "RemoveContainer" containerID="bb3e955ccf77df549efbaada0405675e70cd6157ed98d8870fe794cbb6b9b714" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.145531 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.242357 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-ssh-key\") pod \"898446c8-028e-442d-a7db-5d2218888fe8\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.242551 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n4q24\" (UniqueName: \"kubernetes.io/projected/898446c8-028e-442d-a7db-5d2218888fe8-kube-api-access-n4q24\") pod \"898446c8-028e-442d-a7db-5d2218888fe8\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.242676 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-inventory\") pod \"898446c8-028e-442d-a7db-5d2218888fe8\" (UID: \"898446c8-028e-442d-a7db-5d2218888fe8\") " Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.248110 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/898446c8-028e-442d-a7db-5d2218888fe8-kube-api-access-n4q24" (OuterVolumeSpecName: "kube-api-access-n4q24") pod "898446c8-028e-442d-a7db-5d2218888fe8" (UID: "898446c8-028e-442d-a7db-5d2218888fe8"). InnerVolumeSpecName "kube-api-access-n4q24". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.266016 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-inventory" (OuterVolumeSpecName: "inventory") pod "898446c8-028e-442d-a7db-5d2218888fe8" (UID: "898446c8-028e-442d-a7db-5d2218888fe8"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.267663 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "898446c8-028e-442d-a7db-5d2218888fe8" (UID: "898446c8-028e-442d-a7db-5d2218888fe8"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.345111 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n4q24\" (UniqueName: \"kubernetes.io/projected/898446c8-028e-442d-a7db-5d2218888fe8-kube-api-access-n4q24\") on node \"crc\" DevicePath \"\"" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.345180 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.345196 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/898446c8-028e-442d-a7db-5d2218888fe8-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.698232 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.698234 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h" event={"ID":"898446c8-028e-442d-a7db-5d2218888fe8","Type":"ContainerDied","Data":"c800c7ba8df0dd7bddd5a09d88063166379a69f88d036291620eab78fa74e1fa"} Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.698470 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c800c7ba8df0dd7bddd5a09d88063166379a69f88d036291620eab78fa74e1fa" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.804828 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gqk92"] Oct 01 16:11:46 crc kubenswrapper[4949]: E1001 16:11:46.805318 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="898446c8-028e-442d-a7db-5d2218888fe8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.805334 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="898446c8-028e-442d-a7db-5d2218888fe8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.805613 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="898446c8-028e-442d-a7db-5d2218888fe8" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.806420 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.808800 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.813327 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.813427 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.813439 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.817599 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gqk92"] Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.856080 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gqk92\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.856163 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvppq\" (UniqueName: \"kubernetes.io/projected/2e31db6d-37b2-494d-b1cd-536f07df5752-kube-api-access-pvppq\") pod \"ssh-known-hosts-edpm-deployment-gqk92\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.856517 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gqk92\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.957304 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gqk92\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.957644 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pvppq\" (UniqueName: \"kubernetes.io/projected/2e31db6d-37b2-494d-b1cd-536f07df5752-kube-api-access-pvppq\") pod \"ssh-known-hosts-edpm-deployment-gqk92\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.957723 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gqk92\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.961209 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-gqk92\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.961443 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-gqk92\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:46 crc kubenswrapper[4949]: I1001 16:11:46.978469 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvppq\" (UniqueName: \"kubernetes.io/projected/2e31db6d-37b2-494d-b1cd-536f07df5752-kube-api-access-pvppq\") pod \"ssh-known-hosts-edpm-deployment-gqk92\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:47 crc kubenswrapper[4949]: I1001 16:11:47.128049 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:47 crc kubenswrapper[4949]: I1001 16:11:47.682736 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gqk92"] Oct 01 16:11:47 crc kubenswrapper[4949]: I1001 16:11:47.707373 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" event={"ID":"2e31db6d-37b2-494d-b1cd-536f07df5752","Type":"ContainerStarted","Data":"7a8a2530bbed4f00c8e9e7d2eb33d30b214c7e1e1b2d1672fdb583c60315c5b9"} Oct 01 16:11:48 crc kubenswrapper[4949]: I1001 16:11:48.720586 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" event={"ID":"2e31db6d-37b2-494d-b1cd-536f07df5752","Type":"ContainerStarted","Data":"d0d4eb5c56406c9da3aa277abd13f297c734562e39e1561722f4ae49296af279"} Oct 01 16:11:48 crc kubenswrapper[4949]: I1001 16:11:48.757391 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" 
podStartSLOduration=2.265794658 podStartE2EDuration="2.75735634s" podCreationTimestamp="2025-10-01 16:11:46 +0000 UTC" firstStartedPulling="2025-10-01 16:11:47.690275911 +0000 UTC m=+1806.995882102" lastFinishedPulling="2025-10-01 16:11:48.181837593 +0000 UTC m=+1807.487443784" observedRunningTime="2025-10-01 16:11:48.744391399 +0000 UTC m=+1808.049997630" watchObservedRunningTime="2025-10-01 16:11:48.75735634 +0000 UTC m=+1808.062962571" Oct 01 16:11:52 crc kubenswrapper[4949]: I1001 16:11:52.601879 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:11:52 crc kubenswrapper[4949]: E1001 16:11:52.602649 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:11:56 crc kubenswrapper[4949]: I1001 16:11:56.809178 4949 generic.go:334] "Generic (PLEG): container finished" podID="2e31db6d-37b2-494d-b1cd-536f07df5752" containerID="d0d4eb5c56406c9da3aa277abd13f297c734562e39e1561722f4ae49296af279" exitCode=0 Oct 01 16:11:56 crc kubenswrapper[4949]: I1001 16:11:56.809261 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" event={"ID":"2e31db6d-37b2-494d-b1cd-536f07df5752","Type":"ContainerDied","Data":"d0d4eb5c56406c9da3aa277abd13f297c734562e39e1561722f4ae49296af279"} Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.221754 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.284520 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-ssh-key-openstack-edpm-ipam\") pod \"2e31db6d-37b2-494d-b1cd-536f07df5752\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.284566 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-inventory-0\") pod \"2e31db6d-37b2-494d-b1cd-536f07df5752\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.284882 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvppq\" (UniqueName: \"kubernetes.io/projected/2e31db6d-37b2-494d-b1cd-536f07df5752-kube-api-access-pvppq\") pod \"2e31db6d-37b2-494d-b1cd-536f07df5752\" (UID: \"2e31db6d-37b2-494d-b1cd-536f07df5752\") " Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.290291 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2e31db6d-37b2-494d-b1cd-536f07df5752-kube-api-access-pvppq" (OuterVolumeSpecName: "kube-api-access-pvppq") pod "2e31db6d-37b2-494d-b1cd-536f07df5752" (UID: "2e31db6d-37b2-494d-b1cd-536f07df5752"). InnerVolumeSpecName "kube-api-access-pvppq". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.316263 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "2e31db6d-37b2-494d-b1cd-536f07df5752" (UID: "2e31db6d-37b2-494d-b1cd-536f07df5752"). 
InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.316920 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "2e31db6d-37b2-494d-b1cd-536f07df5752" (UID: "2e31db6d-37b2-494d-b1cd-536f07df5752"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.386781 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pvppq\" (UniqueName: \"kubernetes.io/projected/2e31db6d-37b2-494d-b1cd-536f07df5752-kube-api-access-pvppq\") on node \"crc\" DevicePath \"\"" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.386823 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.386834 4949 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/2e31db6d-37b2-494d-b1cd-536f07df5752-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.829801 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" event={"ID":"2e31db6d-37b2-494d-b1cd-536f07df5752","Type":"ContainerDied","Data":"7a8a2530bbed4f00c8e9e7d2eb33d30b214c7e1e1b2d1672fdb583c60315c5b9"} Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.829864 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a8a2530bbed4f00c8e9e7d2eb33d30b214c7e1e1b2d1672fdb583c60315c5b9" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.829861 
4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-gqk92" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.932987 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb"] Oct 01 16:11:58 crc kubenswrapper[4949]: E1001 16:11:58.933525 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2e31db6d-37b2-494d-b1cd-536f07df5752" containerName="ssh-known-hosts-edpm-deployment" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.933551 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2e31db6d-37b2-494d-b1cd-536f07df5752" containerName="ssh-known-hosts-edpm-deployment" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.933793 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2e31db6d-37b2-494d-b1cd-536f07df5752" containerName="ssh-known-hosts-edpm-deployment" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.934629 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.936782 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.937656 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.937973 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.938070 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.941882 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb"] Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.998139 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-268bb\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.998364 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kvw7\" (UniqueName: \"kubernetes.io/projected/962745c2-f5ca-4cde-8543-6ded2a82645d-kube-api-access-2kvw7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-268bb\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:58 crc kubenswrapper[4949]: I1001 16:11:58.998397 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-268bb\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:59 crc kubenswrapper[4949]: I1001 16:11:59.100467 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-268bb\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:59 crc kubenswrapper[4949]: I1001 16:11:59.101200 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kvw7\" (UniqueName: \"kubernetes.io/projected/962745c2-f5ca-4cde-8543-6ded2a82645d-kube-api-access-2kvw7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-268bb\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:59 crc kubenswrapper[4949]: I1001 16:11:59.101503 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-268bb\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:59 crc kubenswrapper[4949]: I1001 16:11:59.107621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-268bb\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:59 crc kubenswrapper[4949]: I1001 16:11:59.107962 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-268bb\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:59 crc kubenswrapper[4949]: I1001 16:11:59.133742 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kvw7\" (UniqueName: \"kubernetes.io/projected/962745c2-f5ca-4cde-8543-6ded2a82645d-kube-api-access-2kvw7\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-268bb\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:59 crc kubenswrapper[4949]: I1001 16:11:59.258502 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:11:59 crc kubenswrapper[4949]: I1001 16:11:59.822923 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb"] Oct 01 16:11:59 crc kubenswrapper[4949]: I1001 16:11:59.840182 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" event={"ID":"962745c2-f5ca-4cde-8543-6ded2a82645d","Type":"ContainerStarted","Data":"30d96433e631f977d5ff54400bdeb3d7109b533845424fd4f8756c6f66aa6724"} Oct 01 16:12:00 crc kubenswrapper[4949]: I1001 16:12:00.850080 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" event={"ID":"962745c2-f5ca-4cde-8543-6ded2a82645d","Type":"ContainerStarted","Data":"024d34b43b4c5fd5ed6c7dd8f29a3cd21bf0c404f94bc006eeb759562015f802"} Oct 01 16:12:00 crc kubenswrapper[4949]: I1001 16:12:00.874302 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" podStartSLOduration=2.437657747 podStartE2EDuration="2.874280719s" podCreationTimestamp="2025-10-01 16:11:58 +0000 UTC" firstStartedPulling="2025-10-01 16:11:59.820615784 +0000 UTC m=+1819.126221985" lastFinishedPulling="2025-10-01 16:12:00.257238766 +0000 UTC m=+1819.562844957" observedRunningTime="2025-10-01 16:12:00.864091116 +0000 UTC m=+1820.169697327" watchObservedRunningTime="2025-10-01 16:12:00.874280719 +0000 UTC m=+1820.179886920" Oct 01 16:12:07 crc kubenswrapper[4949]: I1001 16:12:07.602444 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:12:07 crc kubenswrapper[4949]: E1001 16:12:07.603317 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:12:08 crc kubenswrapper[4949]: I1001 16:12:08.923311 4949 generic.go:334] "Generic (PLEG): container finished" podID="962745c2-f5ca-4cde-8543-6ded2a82645d" containerID="024d34b43b4c5fd5ed6c7dd8f29a3cd21bf0c404f94bc006eeb759562015f802" exitCode=0 Oct 01 16:12:08 crc kubenswrapper[4949]: I1001 16:12:08.923387 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" event={"ID":"962745c2-f5ca-4cde-8543-6ded2a82645d","Type":"ContainerDied","Data":"024d34b43b4c5fd5ed6c7dd8f29a3cd21bf0c404f94bc006eeb759562015f802"} Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.322941 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.424647 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2kvw7\" (UniqueName: \"kubernetes.io/projected/962745c2-f5ca-4cde-8543-6ded2a82645d-kube-api-access-2kvw7\") pod \"962745c2-f5ca-4cde-8543-6ded2a82645d\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.424905 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-inventory\") pod \"962745c2-f5ca-4cde-8543-6ded2a82645d\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.425060 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-ssh-key\") pod \"962745c2-f5ca-4cde-8543-6ded2a82645d\" (UID: \"962745c2-f5ca-4cde-8543-6ded2a82645d\") " Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.429786 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/962745c2-f5ca-4cde-8543-6ded2a82645d-kube-api-access-2kvw7" (OuterVolumeSpecName: "kube-api-access-2kvw7") pod "962745c2-f5ca-4cde-8543-6ded2a82645d" (UID: "962745c2-f5ca-4cde-8543-6ded2a82645d"). InnerVolumeSpecName "kube-api-access-2kvw7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.458665 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "962745c2-f5ca-4cde-8543-6ded2a82645d" (UID: "962745c2-f5ca-4cde-8543-6ded2a82645d"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.459026 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-inventory" (OuterVolumeSpecName: "inventory") pod "962745c2-f5ca-4cde-8543-6ded2a82645d" (UID: "962745c2-f5ca-4cde-8543-6ded2a82645d"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.528164 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2kvw7\" (UniqueName: \"kubernetes.io/projected/962745c2-f5ca-4cde-8543-6ded2a82645d-kube-api-access-2kvw7\") on node \"crc\" DevicePath \"\"" Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.528193 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.528203 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/962745c2-f5ca-4cde-8543-6ded2a82645d-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.946381 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" event={"ID":"962745c2-f5ca-4cde-8543-6ded2a82645d","Type":"ContainerDied","Data":"30d96433e631f977d5ff54400bdeb3d7109b533845424fd4f8756c6f66aa6724"} Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.946442 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="30d96433e631f977d5ff54400bdeb3d7109b533845424fd4f8756c6f66aa6724" Oct 01 16:12:10 crc kubenswrapper[4949]: I1001 16:12:10.946554 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.037630 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn"] Oct 01 16:12:11 crc kubenswrapper[4949]: E1001 16:12:11.038317 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="962745c2-f5ca-4cde-8543-6ded2a82645d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.038344 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="962745c2-f5ca-4cde-8543-6ded2a82645d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.038598 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="962745c2-f5ca-4cde-8543-6ded2a82645d" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.039426 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.046368 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.046602 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.046715 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.047887 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.052628 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn"] Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.242201 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2lvdc\" (UniqueName: \"kubernetes.io/projected/bdfaf2f8-66f9-4988-9142-98222b343bc0-kube-api-access-2lvdc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.242303 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.242449 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.344037 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.344149 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2lvdc\" (UniqueName: \"kubernetes.io/projected/bdfaf2f8-66f9-4988-9142-98222b343bc0-kube-api-access-2lvdc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.344192 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.350027 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn\" (UID: 
\"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.350393 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.364232 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2lvdc\" (UniqueName: \"kubernetes.io/projected/bdfaf2f8-66f9-4988-9142-98222b343bc0-kube-api-access-2lvdc\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.370924 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.888474 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn"] Oct 01 16:12:11 crc kubenswrapper[4949]: I1001 16:12:11.957393 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" event={"ID":"bdfaf2f8-66f9-4988-9142-98222b343bc0","Type":"ContainerStarted","Data":"7e0fe8d158f0d04e1c528e357848cc01a561fbbdd7cf44a7d2d0a783b7b6b225"} Oct 01 16:12:12 crc kubenswrapper[4949]: I1001 16:12:12.969382 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" event={"ID":"bdfaf2f8-66f9-4988-9142-98222b343bc0","Type":"ContainerStarted","Data":"891cd20fd0c93b05bdce4f720688ec05518d2a32f0fbc2e2c823820ec19a534a"} Oct 01 16:12:18 crc kubenswrapper[4949]: I1001 16:12:18.603050 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:12:19 crc kubenswrapper[4949]: I1001 16:12:19.027572 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"90c4b46f71f5dc2c694e5dd56eb6f14191f02494f0dd0b6bd71d4be48af4cdce"} Oct 01 16:12:19 crc kubenswrapper[4949]: I1001 16:12:19.046710 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" podStartSLOduration=7.544846167 podStartE2EDuration="8.046693096s" podCreationTimestamp="2025-10-01 16:12:11 +0000 UTC" firstStartedPulling="2025-10-01 16:12:11.900380608 +0000 UTC m=+1831.205986809" lastFinishedPulling="2025-10-01 16:12:12.402227507 +0000 UTC m=+1831.707833738" observedRunningTime="2025-10-01 
16:12:12.985943132 +0000 UTC m=+1832.291549323" watchObservedRunningTime="2025-10-01 16:12:19.046693096 +0000 UTC m=+1838.352299287" Oct 01 16:12:22 crc kubenswrapper[4949]: I1001 16:12:22.050247 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/nova-cell1-cell-mapping-bkdh7"] Oct 01 16:12:22 crc kubenswrapper[4949]: I1001 16:12:22.058282 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/nova-cell1-cell-mapping-bkdh7"] Oct 01 16:12:23 crc kubenswrapper[4949]: I1001 16:12:23.066521 4949 generic.go:334] "Generic (PLEG): container finished" podID="bdfaf2f8-66f9-4988-9142-98222b343bc0" containerID="891cd20fd0c93b05bdce4f720688ec05518d2a32f0fbc2e2c823820ec19a534a" exitCode=0 Oct 01 16:12:23 crc kubenswrapper[4949]: I1001 16:12:23.066590 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" event={"ID":"bdfaf2f8-66f9-4988-9142-98222b343bc0","Type":"ContainerDied","Data":"891cd20fd0c93b05bdce4f720688ec05518d2a32f0fbc2e2c823820ec19a534a"} Oct 01 16:12:23 crc kubenswrapper[4949]: I1001 16:12:23.642940 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ef377cf2-dc25-42b4-bbbc-057ddd12c20d" path="/var/lib/kubelet/pods/ef377cf2-dc25-42b4-bbbc-057ddd12c20d/volumes" Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.509399 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.545895 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-inventory\") pod \"bdfaf2f8-66f9-4988-9142-98222b343bc0\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.546032 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2lvdc\" (UniqueName: \"kubernetes.io/projected/bdfaf2f8-66f9-4988-9142-98222b343bc0-kube-api-access-2lvdc\") pod \"bdfaf2f8-66f9-4988-9142-98222b343bc0\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.546319 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-ssh-key\") pod \"bdfaf2f8-66f9-4988-9142-98222b343bc0\" (UID: \"bdfaf2f8-66f9-4988-9142-98222b343bc0\") " Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.552935 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bdfaf2f8-66f9-4988-9142-98222b343bc0-kube-api-access-2lvdc" (OuterVolumeSpecName: "kube-api-access-2lvdc") pod "bdfaf2f8-66f9-4988-9142-98222b343bc0" (UID: "bdfaf2f8-66f9-4988-9142-98222b343bc0"). InnerVolumeSpecName "kube-api-access-2lvdc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.577307 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-inventory" (OuterVolumeSpecName: "inventory") pod "bdfaf2f8-66f9-4988-9142-98222b343bc0" (UID: "bdfaf2f8-66f9-4988-9142-98222b343bc0"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.592839 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bdfaf2f8-66f9-4988-9142-98222b343bc0" (UID: "bdfaf2f8-66f9-4988-9142-98222b343bc0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.648808 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.648876 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2lvdc\" (UniqueName: \"kubernetes.io/projected/bdfaf2f8-66f9-4988-9142-98222b343bc0-kube-api-access-2lvdc\") on node \"crc\" DevicePath \"\"" Oct 01 16:12:24 crc kubenswrapper[4949]: I1001 16:12:24.648891 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bdfaf2f8-66f9-4988-9142-98222b343bc0-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:12:25 crc kubenswrapper[4949]: I1001 16:12:25.092009 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" event={"ID":"bdfaf2f8-66f9-4988-9142-98222b343bc0","Type":"ContainerDied","Data":"7e0fe8d158f0d04e1c528e357848cc01a561fbbdd7cf44a7d2d0a783b7b6b225"} Oct 01 16:12:25 crc kubenswrapper[4949]: I1001 16:12:25.092069 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7e0fe8d158f0d04e1c528e357848cc01a561fbbdd7cf44a7d2d0a783b7b6b225" Oct 01 16:12:25 crc kubenswrapper[4949]: I1001 16:12:25.092150 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn" Oct 01 16:12:46 crc kubenswrapper[4949]: I1001 16:12:46.197534 4949 scope.go:117] "RemoveContainer" containerID="d9737839c3b9452cbe4b575d7008806208f989bdaf749ddf577ea68583eabdbf" Oct 01 16:14:18 crc kubenswrapper[4949]: I1001 16:14:18.038753 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:14:18 crc kubenswrapper[4949]: I1001 16:14:18.039383 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:14:48 crc kubenswrapper[4949]: I1001 16:14:48.038501 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:14:48 crc kubenswrapper[4949]: I1001 16:14:48.039238 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.167897 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv"] Oct 01 16:15:00 crc kubenswrapper[4949]: 
E1001 16:15:00.173676 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bdfaf2f8-66f9-4988-9142-98222b343bc0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.173722 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="bdfaf2f8-66f9-4988-9142-98222b343bc0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.174097 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="bdfaf2f8-66f9-4988-9142-98222b343bc0" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.175199 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.178457 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.178701 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.188016 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv"] Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.332598 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1fc09fb-8949-422e-80cd-e1b1de960653-secret-volume\") pod \"collect-profiles-29322255-6hmtv\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.333012 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1fc09fb-8949-422e-80cd-e1b1de960653-config-volume\") pod \"collect-profiles-29322255-6hmtv\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.333055 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-24ms7\" (UniqueName: \"kubernetes.io/projected/d1fc09fb-8949-422e-80cd-e1b1de960653-kube-api-access-24ms7\") pod \"collect-profiles-29322255-6hmtv\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.435090 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1fc09fb-8949-422e-80cd-e1b1de960653-secret-volume\") pod \"collect-profiles-29322255-6hmtv\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.435244 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1fc09fb-8949-422e-80cd-e1b1de960653-config-volume\") pod \"collect-profiles-29322255-6hmtv\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.435306 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-24ms7\" (UniqueName: \"kubernetes.io/projected/d1fc09fb-8949-422e-80cd-e1b1de960653-kube-api-access-24ms7\") pod \"collect-profiles-29322255-6hmtv\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.437427 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1fc09fb-8949-422e-80cd-e1b1de960653-config-volume\") pod \"collect-profiles-29322255-6hmtv\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.447356 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1fc09fb-8949-422e-80cd-e1b1de960653-secret-volume\") pod \"collect-profiles-29322255-6hmtv\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.464098 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-24ms7\" (UniqueName: \"kubernetes.io/projected/d1fc09fb-8949-422e-80cd-e1b1de960653-kube-api-access-24ms7\") pod \"collect-profiles-29322255-6hmtv\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:00 crc kubenswrapper[4949]: I1001 16:15:00.512236 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:01 crc kubenswrapper[4949]: I1001 16:15:01.040439 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv"] Oct 01 16:15:01 crc kubenswrapper[4949]: W1001 16:15:01.057155 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1fc09fb_8949_422e_80cd_e1b1de960653.slice/crio-1e7d6b6c0814b11a81cfcb4b01e16245f5933bc169b5cbaf17a85ca64460cc57 WatchSource:0}: Error finding container 1e7d6b6c0814b11a81cfcb4b01e16245f5933bc169b5cbaf17a85ca64460cc57: Status 404 returned error can't find the container with id 1e7d6b6c0814b11a81cfcb4b01e16245f5933bc169b5cbaf17a85ca64460cc57 Oct 01 16:15:01 crc kubenswrapper[4949]: I1001 16:15:01.691682 4949 generic.go:334] "Generic (PLEG): container finished" podID="d1fc09fb-8949-422e-80cd-e1b1de960653" containerID="321fc938d06a3668d2c402b44b27d40ffed28a68ea9ac6311ce6fe8f1d41859a" exitCode=0 Oct 01 16:15:01 crc kubenswrapper[4949]: I1001 16:15:01.691788 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" event={"ID":"d1fc09fb-8949-422e-80cd-e1b1de960653","Type":"ContainerDied","Data":"321fc938d06a3668d2c402b44b27d40ffed28a68ea9ac6311ce6fe8f1d41859a"} Oct 01 16:15:01 crc kubenswrapper[4949]: I1001 16:15:01.692146 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" event={"ID":"d1fc09fb-8949-422e-80cd-e1b1de960653","Type":"ContainerStarted","Data":"1e7d6b6c0814b11a81cfcb4b01e16245f5933bc169b5cbaf17a85ca64460cc57"} Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.169654 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.293211 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1fc09fb-8949-422e-80cd-e1b1de960653-secret-volume\") pod \"d1fc09fb-8949-422e-80cd-e1b1de960653\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.293522 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1fc09fb-8949-422e-80cd-e1b1de960653-config-volume\") pod \"d1fc09fb-8949-422e-80cd-e1b1de960653\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.293628 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-24ms7\" (UniqueName: \"kubernetes.io/projected/d1fc09fb-8949-422e-80cd-e1b1de960653-kube-api-access-24ms7\") pod \"d1fc09fb-8949-422e-80cd-e1b1de960653\" (UID: \"d1fc09fb-8949-422e-80cd-e1b1de960653\") " Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.294373 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d1fc09fb-8949-422e-80cd-e1b1de960653-config-volume" (OuterVolumeSpecName: "config-volume") pod "d1fc09fb-8949-422e-80cd-e1b1de960653" (UID: "d1fc09fb-8949-422e-80cd-e1b1de960653"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.299406 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1fc09fb-8949-422e-80cd-e1b1de960653-kube-api-access-24ms7" (OuterVolumeSpecName: "kube-api-access-24ms7") pod "d1fc09fb-8949-422e-80cd-e1b1de960653" (UID: "d1fc09fb-8949-422e-80cd-e1b1de960653"). 
InnerVolumeSpecName "kube-api-access-24ms7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.300984 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1fc09fb-8949-422e-80cd-e1b1de960653-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "d1fc09fb-8949-422e-80cd-e1b1de960653" (UID: "d1fc09fb-8949-422e-80cd-e1b1de960653"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.396149 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d1fc09fb-8949-422e-80cd-e1b1de960653-config-volume\") on node \"crc\" DevicePath \"\"" Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.396229 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-24ms7\" (UniqueName: \"kubernetes.io/projected/d1fc09fb-8949-422e-80cd-e1b1de960653-kube-api-access-24ms7\") on node \"crc\" DevicePath \"\"" Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.396267 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/d1fc09fb-8949-422e-80cd-e1b1de960653-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.717489 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" event={"ID":"d1fc09fb-8949-422e-80cd-e1b1de960653","Type":"ContainerDied","Data":"1e7d6b6c0814b11a81cfcb4b01e16245f5933bc169b5cbaf17a85ca64460cc57"} Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.717538 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv" Oct 01 16:15:03 crc kubenswrapper[4949]: I1001 16:15:03.717548 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e7d6b6c0814b11a81cfcb4b01e16245f5933bc169b5cbaf17a85ca64460cc57" Oct 01 16:15:04 crc kubenswrapper[4949]: I1001 16:15:04.251030 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r"] Oct 01 16:15:04 crc kubenswrapper[4949]: I1001 16:15:04.257304 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322210-dtf9r"] Oct 01 16:15:05 crc kubenswrapper[4949]: I1001 16:15:05.618044 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98af3efd-3e5b-4bfd-96ae-f3629aa18f43" path="/var/lib/kubelet/pods/98af3efd-3e5b-4bfd-96ae-f3629aa18f43/volumes" Oct 01 16:15:18 crc kubenswrapper[4949]: I1001 16:15:18.039049 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:15:18 crc kubenswrapper[4949]: I1001 16:15:18.039709 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:15:18 crc kubenswrapper[4949]: I1001 16:15:18.039823 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 16:15:18 crc kubenswrapper[4949]: I1001 16:15:18.040775 4949 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"90c4b46f71f5dc2c694e5dd56eb6f14191f02494f0dd0b6bd71d4be48af4cdce"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 16:15:18 crc kubenswrapper[4949]: I1001 16:15:18.040847 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://90c4b46f71f5dc2c694e5dd56eb6f14191f02494f0dd0b6bd71d4be48af4cdce" gracePeriod=600 Oct 01 16:15:18 crc kubenswrapper[4949]: I1001 16:15:18.875710 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="90c4b46f71f5dc2c694e5dd56eb6f14191f02494f0dd0b6bd71d4be48af4cdce" exitCode=0 Oct 01 16:15:18 crc kubenswrapper[4949]: I1001 16:15:18.875776 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"90c4b46f71f5dc2c694e5dd56eb6f14191f02494f0dd0b6bd71d4be48af4cdce"} Oct 01 16:15:18 crc kubenswrapper[4949]: I1001 16:15:18.876429 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de"} Oct 01 16:15:18 crc kubenswrapper[4949]: I1001 16:15:18.876463 4949 scope.go:117] "RemoveContainer" containerID="8c210bfce86fb45b4a25ef5c865680ea34bee887c4c2e606bab8c9c8e7ddd056" Oct 01 16:15:46 crc kubenswrapper[4949]: I1001 16:15:46.309739 4949 scope.go:117] "RemoveContainer" 
containerID="2d490e2f46af61ffe392c319b26ad8d1d9ce07b8f2e490d32a341c0386f66336" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.002326 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hfq97"] Oct 01 16:16:41 crc kubenswrapper[4949]: E1001 16:16:41.004306 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1fc09fb-8949-422e-80cd-e1b1de960653" containerName="collect-profiles" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.004436 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1fc09fb-8949-422e-80cd-e1b1de960653" containerName="collect-profiles" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.004755 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1fc09fb-8949-422e-80cd-e1b1de960653" containerName="collect-profiles" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.006677 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.015838 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfq97"] Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.168085 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-utilities\") pod \"community-operators-hfq97\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.168659 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8lwl\" (UniqueName: \"kubernetes.io/projected/c70fb164-e2c2-4573-b3db-b86df5cc023c-kube-api-access-j8lwl\") pod \"community-operators-hfq97\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " 
pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.168953 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-catalog-content\") pod \"community-operators-hfq97\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.270250 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-utilities\") pod \"community-operators-hfq97\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.270339 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8lwl\" (UniqueName: \"kubernetes.io/projected/c70fb164-e2c2-4573-b3db-b86df5cc023c-kube-api-access-j8lwl\") pod \"community-operators-hfq97\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.270389 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-catalog-content\") pod \"community-operators-hfq97\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.270702 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-utilities\") pod \"community-operators-hfq97\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " 
pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.270816 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-catalog-content\") pod \"community-operators-hfq97\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.290493 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8lwl\" (UniqueName: \"kubernetes.io/projected/c70fb164-e2c2-4573-b3db-b86df5cc023c-kube-api-access-j8lwl\") pod \"community-operators-hfq97\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.351968 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:41 crc kubenswrapper[4949]: I1001 16:16:41.828568 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hfq97"] Oct 01 16:16:42 crc kubenswrapper[4949]: I1001 16:16:42.797419 4949 generic.go:334] "Generic (PLEG): container finished" podID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerID="bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d" exitCode=0 Oct 01 16:16:42 crc kubenswrapper[4949]: I1001 16:16:42.798291 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfq97" event={"ID":"c70fb164-e2c2-4573-b3db-b86df5cc023c","Type":"ContainerDied","Data":"bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d"} Oct 01 16:16:42 crc kubenswrapper[4949]: I1001 16:16:42.798744 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfq97" 
event={"ID":"c70fb164-e2c2-4573-b3db-b86df5cc023c","Type":"ContainerStarted","Data":"36593f080506ae9111bf3766aa31fface5abd87d3b8efecc80b43eef5b8e0ce3"} Oct 01 16:16:42 crc kubenswrapper[4949]: I1001 16:16:42.824242 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 16:16:43 crc kubenswrapper[4949]: I1001 16:16:43.809157 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfq97" event={"ID":"c70fb164-e2c2-4573-b3db-b86df5cc023c","Type":"ContainerStarted","Data":"eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24"} Oct 01 16:16:44 crc kubenswrapper[4949]: I1001 16:16:44.822106 4949 generic.go:334] "Generic (PLEG): container finished" podID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerID="eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24" exitCode=0 Oct 01 16:16:44 crc kubenswrapper[4949]: I1001 16:16:44.822242 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfq97" event={"ID":"c70fb164-e2c2-4573-b3db-b86df5cc023c","Type":"ContainerDied","Data":"eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24"} Oct 01 16:16:45 crc kubenswrapper[4949]: I1001 16:16:45.836766 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfq97" event={"ID":"c70fb164-e2c2-4573-b3db-b86df5cc023c","Type":"ContainerStarted","Data":"238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298"} Oct 01 16:16:45 crc kubenswrapper[4949]: I1001 16:16:45.863292 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hfq97" podStartSLOduration=3.108121764 podStartE2EDuration="5.863275229s" podCreationTimestamp="2025-10-01 16:16:40 +0000 UTC" firstStartedPulling="2025-10-01 16:16:42.823962533 +0000 UTC m=+2102.129568724" lastFinishedPulling="2025-10-01 16:16:45.579115988 +0000 UTC 
m=+2104.884722189" observedRunningTime="2025-10-01 16:16:45.853760925 +0000 UTC m=+2105.159367116" watchObservedRunningTime="2025-10-01 16:16:45.863275229 +0000 UTC m=+2105.168881420" Oct 01 16:16:51 crc kubenswrapper[4949]: I1001 16:16:51.352751 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:51 crc kubenswrapper[4949]: I1001 16:16:51.353615 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:51 crc kubenswrapper[4949]: I1001 16:16:51.445873 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:51 crc kubenswrapper[4949]: I1001 16:16:51.953343 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:52 crc kubenswrapper[4949]: I1001 16:16:52.018981 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfq97"] Oct 01 16:16:53 crc kubenswrapper[4949]: I1001 16:16:53.910653 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-hfq97" podUID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerName="registry-server" containerID="cri-o://238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298" gracePeriod=2 Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.450037 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.579434 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-catalog-content\") pod \"c70fb164-e2c2-4573-b3db-b86df5cc023c\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.579691 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-utilities\") pod \"c70fb164-e2c2-4573-b3db-b86df5cc023c\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.579716 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8lwl\" (UniqueName: \"kubernetes.io/projected/c70fb164-e2c2-4573-b3db-b86df5cc023c-kube-api-access-j8lwl\") pod \"c70fb164-e2c2-4573-b3db-b86df5cc023c\" (UID: \"c70fb164-e2c2-4573-b3db-b86df5cc023c\") " Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.581452 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-utilities" (OuterVolumeSpecName: "utilities") pod "c70fb164-e2c2-4573-b3db-b86df5cc023c" (UID: "c70fb164-e2c2-4573-b3db-b86df5cc023c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.590416 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c70fb164-e2c2-4573-b3db-b86df5cc023c-kube-api-access-j8lwl" (OuterVolumeSpecName: "kube-api-access-j8lwl") pod "c70fb164-e2c2-4573-b3db-b86df5cc023c" (UID: "c70fb164-e2c2-4573-b3db-b86df5cc023c"). InnerVolumeSpecName "kube-api-access-j8lwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.644340 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c70fb164-e2c2-4573-b3db-b86df5cc023c" (UID: "c70fb164-e2c2-4573-b3db-b86df5cc023c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.682117 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.682162 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8lwl\" (UniqueName: \"kubernetes.io/projected/c70fb164-e2c2-4573-b3db-b86df5cc023c-kube-api-access-j8lwl\") on node \"crc\" DevicePath \"\"" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.682173 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c70fb164-e2c2-4573-b3db-b86df5cc023c-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.922549 4949 generic.go:334] "Generic (PLEG): container finished" podID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerID="238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298" exitCode=0 Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.922584 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hfq97" event={"ID":"c70fb164-e2c2-4573-b3db-b86df5cc023c","Type":"ContainerDied","Data":"238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298"} Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.922607 4949 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-hfq97" event={"ID":"c70fb164-e2c2-4573-b3db-b86df5cc023c","Type":"ContainerDied","Data":"36593f080506ae9111bf3766aa31fface5abd87d3b8efecc80b43eef5b8e0ce3"} Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.922623 4949 scope.go:117] "RemoveContainer" containerID="238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.922723 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hfq97" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.962236 4949 scope.go:117] "RemoveContainer" containerID="eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24" Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.965644 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-hfq97"] Oct 01 16:16:54 crc kubenswrapper[4949]: I1001 16:16:54.983557 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-hfq97"] Oct 01 16:16:55 crc kubenswrapper[4949]: I1001 16:16:55.009059 4949 scope.go:117] "RemoveContainer" containerID="bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d" Oct 01 16:16:55 crc kubenswrapper[4949]: I1001 16:16:55.043012 4949 scope.go:117] "RemoveContainer" containerID="238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298" Oct 01 16:16:55 crc kubenswrapper[4949]: E1001 16:16:55.044163 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298\": container with ID starting with 238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298 not found: ID does not exist" containerID="238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298" Oct 01 16:16:55 crc kubenswrapper[4949]: I1001 
16:16:55.044599 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298"} err="failed to get container status \"238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298\": rpc error: code = NotFound desc = could not find container \"238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298\": container with ID starting with 238fde49ffbf6f3b46b33f530fbd47711da9f2c1fd7cb19a59c26ecf3398c298 not found: ID does not exist" Oct 01 16:16:55 crc kubenswrapper[4949]: I1001 16:16:55.044643 4949 scope.go:117] "RemoveContainer" containerID="eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24" Oct 01 16:16:55 crc kubenswrapper[4949]: E1001 16:16:55.045161 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24\": container with ID starting with eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24 not found: ID does not exist" containerID="eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24" Oct 01 16:16:55 crc kubenswrapper[4949]: I1001 16:16:55.045233 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24"} err="failed to get container status \"eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24\": rpc error: code = NotFound desc = could not find container \"eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24\": container with ID starting with eb855fe5413ac094740a3b625915b607f9fd82e6c4b793e9867aad26126b1a24 not found: ID does not exist" Oct 01 16:16:55 crc kubenswrapper[4949]: I1001 16:16:55.045283 4949 scope.go:117] "RemoveContainer" containerID="bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d" Oct 01 16:16:55 crc 
kubenswrapper[4949]: E1001 16:16:55.045651 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d\": container with ID starting with bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d not found: ID does not exist" containerID="bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d" Oct 01 16:16:55 crc kubenswrapper[4949]: I1001 16:16:55.045693 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d"} err="failed to get container status \"bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d\": rpc error: code = NotFound desc = could not find container \"bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d\": container with ID starting with bd18e83413409fd5129c8d3d9342a307f65a63470297bf931d73c5898d519a2d not found: ID does not exist" Oct 01 16:16:55 crc kubenswrapper[4949]: I1001 16:16:55.620456 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c70fb164-e2c2-4573-b3db-b86df5cc023c" path="/var/lib/kubelet/pods/c70fb164-e2c2-4573-b3db-b86df5cc023c/volumes" Oct 01 16:17:07 crc kubenswrapper[4949]: E1001 16:17:07.796858 4949 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.188:53368->38.102.83.188:38263: write tcp 38.102.83.188:53368->38.102.83.188:38263: write: broken pipe Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.804911 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pznv4"] Oct 01 16:17:14 crc kubenswrapper[4949]: E1001 16:17:14.805941 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerName="extract-content" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.805979 4949 
state_mem.go:107] "Deleted CPUSet assignment" podUID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerName="extract-content" Oct 01 16:17:14 crc kubenswrapper[4949]: E1001 16:17:14.806011 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerName="registry-server" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.806019 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerName="registry-server" Oct 01 16:17:14 crc kubenswrapper[4949]: E1001 16:17:14.806067 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerName="extract-utilities" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.806077 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerName="extract-utilities" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.806525 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c70fb164-e2c2-4573-b3db-b86df5cc023c" containerName="registry-server" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.808163 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.815677 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pznv4"] Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.841214 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d47724-591c-4862-9e3f-9c190aada131-utilities\") pod \"certified-operators-pznv4\" (UID: \"f0d47724-591c-4862-9e3f-9c190aada131\") " pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.841292 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d47724-591c-4862-9e3f-9c190aada131-catalog-content\") pod \"certified-operators-pznv4\" (UID: \"f0d47724-591c-4862-9e3f-9c190aada131\") " pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.841641 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bp2wq\" (UniqueName: \"kubernetes.io/projected/f0d47724-591c-4862-9e3f-9c190aada131-kube-api-access-bp2wq\") pod \"certified-operators-pznv4\" (UID: \"f0d47724-591c-4862-9e3f-9c190aada131\") " pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.942393 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bp2wq\" (UniqueName: \"kubernetes.io/projected/f0d47724-591c-4862-9e3f-9c190aada131-kube-api-access-bp2wq\") pod \"certified-operators-pznv4\" (UID: \"f0d47724-591c-4862-9e3f-9c190aada131\") " pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.942453 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d47724-591c-4862-9e3f-9c190aada131-utilities\") pod \"certified-operators-pznv4\" (UID: \"f0d47724-591c-4862-9e3f-9c190aada131\") " pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.942490 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d47724-591c-4862-9e3f-9c190aada131-catalog-content\") pod \"certified-operators-pznv4\" (UID: \"f0d47724-591c-4862-9e3f-9c190aada131\") " pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.942996 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0d47724-591c-4862-9e3f-9c190aada131-catalog-content\") pod \"certified-operators-pznv4\" (UID: \"f0d47724-591c-4862-9e3f-9c190aada131\") " pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.943193 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0d47724-591c-4862-9e3f-9c190aada131-utilities\") pod \"certified-operators-pznv4\" (UID: \"f0d47724-591c-4862-9e3f-9c190aada131\") " pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:14 crc kubenswrapper[4949]: I1001 16:17:14.961981 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bp2wq\" (UniqueName: \"kubernetes.io/projected/f0d47724-591c-4862-9e3f-9c190aada131-kube-api-access-bp2wq\") pod \"certified-operators-pznv4\" (UID: \"f0d47724-591c-4862-9e3f-9c190aada131\") " pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:15 crc kubenswrapper[4949]: I1001 16:17:15.134475 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:15 crc kubenswrapper[4949]: I1001 16:17:15.650073 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pznv4"] Oct 01 16:17:16 crc kubenswrapper[4949]: I1001 16:17:16.140753 4949 generic.go:334] "Generic (PLEG): container finished" podID="f0d47724-591c-4862-9e3f-9c190aada131" containerID="d442ef798c8a071f52aa026695279ee6d66a67ba4e7910fa4868bb735bb921a3" exitCode=0 Oct 01 16:17:16 crc kubenswrapper[4949]: I1001 16:17:16.140799 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pznv4" event={"ID":"f0d47724-591c-4862-9e3f-9c190aada131","Type":"ContainerDied","Data":"d442ef798c8a071f52aa026695279ee6d66a67ba4e7910fa4868bb735bb921a3"} Oct 01 16:17:16 crc kubenswrapper[4949]: I1001 16:17:16.141182 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pznv4" event={"ID":"f0d47724-591c-4862-9e3f-9c190aada131","Type":"ContainerStarted","Data":"c95de49ca158af74b5a030edc7fb854bdc49f6dc8f6745ad7d1465c6c9a5aa85"} Oct 01 16:17:18 crc kubenswrapper[4949]: I1001 16:17:18.038974 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:17:18 crc kubenswrapper[4949]: I1001 16:17:18.039360 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:17:21 crc kubenswrapper[4949]: I1001 16:17:21.185660 4949 generic.go:334] "Generic 
(PLEG): container finished" podID="f0d47724-591c-4862-9e3f-9c190aada131" containerID="31468bae05072a90328ebbeb815571b601b2e1f54fe9d9686c87b52bb2cf1274" exitCode=0 Oct 01 16:17:21 crc kubenswrapper[4949]: I1001 16:17:21.185906 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pznv4" event={"ID":"f0d47724-591c-4862-9e3f-9c190aada131","Type":"ContainerDied","Data":"31468bae05072a90328ebbeb815571b601b2e1f54fe9d9686c87b52bb2cf1274"} Oct 01 16:17:22 crc kubenswrapper[4949]: I1001 16:17:22.197299 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pznv4" event={"ID":"f0d47724-591c-4862-9e3f-9c190aada131","Type":"ContainerStarted","Data":"e6f5fbfeb37156a1227d089de05b7d4a7be1cfb07a5dddce8a2e4be3f44474bf"} Oct 01 16:17:22 crc kubenswrapper[4949]: I1001 16:17:22.221205 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pznv4" podStartSLOduration=2.490612117 podStartE2EDuration="8.221187719s" podCreationTimestamp="2025-10-01 16:17:14 +0000 UTC" firstStartedPulling="2025-10-01 16:17:16.142871661 +0000 UTC m=+2135.448477862" lastFinishedPulling="2025-10-01 16:17:21.873447273 +0000 UTC m=+2141.179053464" observedRunningTime="2025-10-01 16:17:22.219420521 +0000 UTC m=+2141.525026722" watchObservedRunningTime="2025-10-01 16:17:22.221187719 +0000 UTC m=+2141.526793910" Oct 01 16:17:25 crc kubenswrapper[4949]: I1001 16:17:25.135569 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:25 crc kubenswrapper[4949]: I1001 16:17:25.136264 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:25 crc kubenswrapper[4949]: I1001 16:17:25.216858 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:35 crc kubenswrapper[4949]: I1001 16:17:35.228979 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pznv4" Oct 01 16:17:35 crc kubenswrapper[4949]: I1001 16:17:35.335685 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pznv4"] Oct 01 16:17:35 crc kubenswrapper[4949]: I1001 16:17:35.393375 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g84kx"] Oct 01 16:17:35 crc kubenswrapper[4949]: I1001 16:17:35.393619 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-g84kx" podUID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerName="registry-server" containerID="cri-o://e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553" gracePeriod=2 Oct 01 16:17:35 crc kubenswrapper[4949]: I1001 16:17:35.890325 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-g84kx" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.074930 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l68cs\" (UniqueName: \"kubernetes.io/projected/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-kube-api-access-l68cs\") pod \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.075232 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-catalog-content\") pod \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.075400 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-utilities\") pod \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\" (UID: \"08f8c987-cdde-4a43-8a61-e01d63fdb5e9\") " Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.076293 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-utilities" (OuterVolumeSpecName: "utilities") pod "08f8c987-cdde-4a43-8a61-e01d63fdb5e9" (UID: "08f8c987-cdde-4a43-8a61-e01d63fdb5e9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.101387 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-kube-api-access-l68cs" (OuterVolumeSpecName: "kube-api-access-l68cs") pod "08f8c987-cdde-4a43-8a61-e01d63fdb5e9" (UID: "08f8c987-cdde-4a43-8a61-e01d63fdb5e9"). InnerVolumeSpecName "kube-api-access-l68cs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.134729 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "08f8c987-cdde-4a43-8a61-e01d63fdb5e9" (UID: "08f8c987-cdde-4a43-8a61-e01d63fdb5e9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.177757 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l68cs\" (UniqueName: \"kubernetes.io/projected/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-kube-api-access-l68cs\") on node \"crc\" DevicePath \"\"" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.177791 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.177802 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/08f8c987-cdde-4a43-8a61-e01d63fdb5e9-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.362948 4949 generic.go:334] "Generic (PLEG): container finished" podID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerID="e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553" exitCode=0 Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.362988 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-g84kx" event={"ID":"08f8c987-cdde-4a43-8a61-e01d63fdb5e9","Type":"ContainerDied","Data":"e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553"} Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.363021 4949 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-g84kx" event={"ID":"08f8c987-cdde-4a43-8a61-e01d63fdb5e9","Type":"ContainerDied","Data":"09ebb9e1dd3c66719c441a22700fe4bf49222a0a8e5b0ca90ccb676dfaf4fa53"} Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.363038 4949 scope.go:117] "RemoveContainer" containerID="e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.363723 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-g84kx" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.385757 4949 scope.go:117] "RemoveContainer" containerID="9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.399210 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-g84kx"] Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.407768 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-g84kx"] Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.441400 4949 scope.go:117] "RemoveContainer" containerID="ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.480294 4949 scope.go:117] "RemoveContainer" containerID="e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553" Oct 01 16:17:36 crc kubenswrapper[4949]: E1001 16:17:36.480744 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553\": container with ID starting with e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553 not found: ID does not exist" containerID="e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 
16:17:36.480786 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553"} err="failed to get container status \"e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553\": rpc error: code = NotFound desc = could not find container \"e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553\": container with ID starting with e18c5122584ca14a66e47ad33c947434a08901d5c407346b44bc76c7d578b553 not found: ID does not exist" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.480811 4949 scope.go:117] "RemoveContainer" containerID="9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae" Oct 01 16:17:36 crc kubenswrapper[4949]: E1001 16:17:36.483553 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae\": container with ID starting with 9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae not found: ID does not exist" containerID="9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.483579 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae"} err="failed to get container status \"9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae\": rpc error: code = NotFound desc = could not find container \"9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae\": container with ID starting with 9f97eecdaf2f752738f6634906ec9d79327018037cc8be23674890514415f6ae not found: ID does not exist" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.483614 4949 scope.go:117] "RemoveContainer" containerID="ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070" Oct 01 16:17:36 crc 
kubenswrapper[4949]: E1001 16:17:36.483898 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070\": container with ID starting with ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070 not found: ID does not exist" containerID="ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070" Oct 01 16:17:36 crc kubenswrapper[4949]: I1001 16:17:36.483942 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070"} err="failed to get container status \"ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070\": rpc error: code = NotFound desc = could not find container \"ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070\": container with ID starting with ebfd00ded25edc4969bd679c40d97d4895cdd10762e9a383c3cecdc08b1fd070 not found: ID does not exist" Oct 01 16:17:37 crc kubenswrapper[4949]: I1001 16:17:37.612515 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" path="/var/lib/kubelet/pods/08f8c987-cdde-4a43-8a61-e01d63fdb5e9/volumes" Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.489947 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gqk92"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.499425 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.505592 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.511901 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.517548 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.523302 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.529205 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.535213 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.540508 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.545625 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.551300 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.556086 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-6kwsn"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.561970 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-gqk92"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.567107 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-268bb"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 
16:17:38.572502 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nkzv4"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.578432 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-2rxmk"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.584752 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-klz9h"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.590343 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-k29r7"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.596076 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-bct5c"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.602103 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-cqdxh"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.608094 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-k85xw"] Oct 01 16:17:38 crc kubenswrapper[4949]: I1001 16:17:38.613150 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-wxjqg"] Oct 01 16:17:39 crc kubenswrapper[4949]: I1001 16:17:39.616868 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b976daa-cecf-4085-a431-7d5f85d127e2" path="/var/lib/kubelet/pods/0b976daa-cecf-4085-a431-7d5f85d127e2/volumes" Oct 01 16:17:39 crc kubenswrapper[4949]: I1001 16:17:39.618784 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2e31db6d-37b2-494d-b1cd-536f07df5752" path="/var/lib/kubelet/pods/2e31db6d-37b2-494d-b1cd-536f07df5752/volumes" Oct 01 16:17:39 crc 
kubenswrapper[4949]: I1001 16:17:39.620088 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67fe130c-27e6-4d46-8f3e-58bc9a9e94a7" path="/var/lib/kubelet/pods/67fe130c-27e6-4d46-8f3e-58bc9a9e94a7/volumes" Oct 01 16:17:39 crc kubenswrapper[4949]: I1001 16:17:39.621427 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="898446c8-028e-442d-a7db-5d2218888fe8" path="/var/lib/kubelet/pods/898446c8-028e-442d-a7db-5d2218888fe8/volumes" Oct 01 16:17:39 crc kubenswrapper[4949]: I1001 16:17:39.623610 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="962745c2-f5ca-4cde-8543-6ded2a82645d" path="/var/lib/kubelet/pods/962745c2-f5ca-4cde-8543-6ded2a82645d/volumes" Oct 01 16:17:39 crc kubenswrapper[4949]: I1001 16:17:39.624593 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac5b0ffa-c24f-42ec-8c6e-91be329a8402" path="/var/lib/kubelet/pods/ac5b0ffa-c24f-42ec-8c6e-91be329a8402/volumes" Oct 01 16:17:39 crc kubenswrapper[4949]: I1001 16:17:39.625520 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bdfaf2f8-66f9-4988-9142-98222b343bc0" path="/var/lib/kubelet/pods/bdfaf2f8-66f9-4988-9142-98222b343bc0/volumes" Oct 01 16:17:39 crc kubenswrapper[4949]: I1001 16:17:39.627180 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df6fb3b0-6b6b-40d2-b610-9a393a89d502" path="/var/lib/kubelet/pods/df6fb3b0-6b6b-40d2-b610-9a393a89d502/volumes" Oct 01 16:17:39 crc kubenswrapper[4949]: I1001 16:17:39.627965 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eaf20575-fb73-4f03-9b71-e9cf7f76710b" path="/var/lib/kubelet/pods/eaf20575-fb73-4f03-9b71-e9cf7f76710b/volumes" Oct 01 16:17:39 crc kubenswrapper[4949]: I1001 16:17:39.628702 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a" path="/var/lib/kubelet/pods/f91c85a1-fe69-4dc3-9e1a-dee5f83fa81a/volumes" Oct 01 16:17:39 crc 
kubenswrapper[4949]: I1001 16:17:39.629989 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fabdc1d2-59b5-4699-b280-e35380873dc2" path="/var/lib/kubelet/pods/fabdc1d2-59b5-4699-b280-e35380873dc2/volumes" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.483553 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2"] Oct 01 16:17:44 crc kubenswrapper[4949]: E1001 16:17:44.484427 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerName="extract-utilities" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.484444 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerName="extract-utilities" Oct 01 16:17:44 crc kubenswrapper[4949]: E1001 16:17:44.484469 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerName="extract-content" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.484478 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerName="extract-content" Oct 01 16:17:44 crc kubenswrapper[4949]: E1001 16:17:44.484497 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerName="registry-server" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.484505 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerName="registry-server" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.484752 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="08f8c987-cdde-4a43-8a61-e01d63fdb5e9" containerName="registry-server" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.485504 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.492724 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.493532 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.493840 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.494734 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.497142 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.501423 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2"] Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.551940 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.552014 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " 
pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.552132 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.552324 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.552369 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ph6w2\" (UniqueName: \"kubernetes.io/projected/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-kube-api-access-ph6w2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.654269 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.654379 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.654423 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.654557 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.654607 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ph6w2\" (UniqueName: \"kubernetes.io/projected/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-kube-api-access-ph6w2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.661396 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ssh-key\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: 
\"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.661559 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-inventory\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.661694 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-repo-setup-combined-ca-bundle\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.662311 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ceph\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.673593 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ph6w2\" (UniqueName: \"kubernetes.io/projected/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-kube-api-access-ph6w2\") pod \"repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:44 crc kubenswrapper[4949]: I1001 16:17:44.808230 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:17:45 crc kubenswrapper[4949]: I1001 16:17:45.482939 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2"] Oct 01 16:17:46 crc kubenswrapper[4949]: I1001 16:17:46.392358 4949 scope.go:117] "RemoveContainer" containerID="ecd50d8a74fd99fafd9394d1ad96e7e74d04fcc11ecb5071005123039991ab7c" Oct 01 16:17:46 crc kubenswrapper[4949]: I1001 16:17:46.473499 4949 scope.go:117] "RemoveContainer" containerID="f09c9612f8bbe30eda621d256848fcdd883819b915b756cf7e40f8022ce43bc0" Oct 01 16:17:46 crc kubenswrapper[4949]: I1001 16:17:46.493639 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" event={"ID":"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4","Type":"ContainerStarted","Data":"41b4600846de5b05c17f54ea1e295e4fd35164a4ad4a4fa131852b6561556809"} Oct 01 16:17:46 crc kubenswrapper[4949]: I1001 16:17:46.514332 4949 scope.go:117] "RemoveContainer" containerID="027e40d88a9075c2c3ed621f6735e0c1d1351a2981c7ec512524851a01f994d3" Oct 01 16:17:46 crc kubenswrapper[4949]: I1001 16:17:46.589768 4949 scope.go:117] "RemoveContainer" containerID="830a2f27e1a0916193ed76c18b05b1468aaace94f97a85efa0ea7e39f051457f" Oct 01 16:17:46 crc kubenswrapper[4949]: I1001 16:17:46.633763 4949 scope.go:117] "RemoveContainer" containerID="763f5fe8db341a31d7ad986a69dd845062f9987afe5eac9513fb28f8e97131b9" Oct 01 16:17:46 crc kubenswrapper[4949]: I1001 16:17:46.672584 4949 scope.go:117] "RemoveContainer" containerID="19919642c99bf230bd71c2bad3373b7df3cd13641c11af2fcabf44579840e112" Oct 01 16:17:46 crc kubenswrapper[4949]: I1001 16:17:46.704537 4949 scope.go:117] "RemoveContainer" containerID="329e02b1d9b404e86bc9052660e2b57e15025154947f78dcaface24a63feea3c" Oct 01 16:17:46 crc kubenswrapper[4949]: I1001 16:17:46.743429 4949 scope.go:117] "RemoveContainer" 
containerID="b279d97a6d69c40fcafc36962f6d15a8dfe7c218281a10d92eead4e772378c37" Oct 01 16:17:47 crc kubenswrapper[4949]: I1001 16:17:47.512385 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" event={"ID":"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4","Type":"ContainerStarted","Data":"8102ecc764db915fd7d0d5e78ee464b6d09f3230dbc9328e9ffdedf570896ddc"} Oct 01 16:17:47 crc kubenswrapper[4949]: I1001 16:17:47.542824 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" podStartSLOduration=2.814065839 podStartE2EDuration="3.542797752s" podCreationTimestamp="2025-10-01 16:17:44 +0000 UTC" firstStartedPulling="2025-10-01 16:17:45.486922821 +0000 UTC m=+2164.792529012" lastFinishedPulling="2025-10-01 16:17:46.215654694 +0000 UTC m=+2165.521260925" observedRunningTime="2025-10-01 16:17:47.535997526 +0000 UTC m=+2166.841603717" watchObservedRunningTime="2025-10-01 16:17:47.542797752 +0000 UTC m=+2166.848403953" Oct 01 16:17:48 crc kubenswrapper[4949]: I1001 16:17:48.038738 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:17:48 crc kubenswrapper[4949]: I1001 16:17:48.038817 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:17:58 crc kubenswrapper[4949]: I1001 16:17:58.629809 4949 generic.go:334] "Generic (PLEG): container finished" podID="7071f30b-0ed1-46d1-a2a7-c37d584f1ef4" 
containerID="8102ecc764db915fd7d0d5e78ee464b6d09f3230dbc9328e9ffdedf570896ddc" exitCode=0 Oct 01 16:17:58 crc kubenswrapper[4949]: I1001 16:17:58.629828 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" event={"ID":"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4","Type":"ContainerDied","Data":"8102ecc764db915fd7d0d5e78ee464b6d09f3230dbc9328e9ffdedf570896ddc"} Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.164677 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.320595 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-repo-setup-combined-ca-bundle\") pod \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.320686 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ceph\") pod \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.320761 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ssh-key\") pod \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.320799 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ph6w2\" (UniqueName: \"kubernetes.io/projected/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-kube-api-access-ph6w2\") pod 
\"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.320901 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-inventory\") pod \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\" (UID: \"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4\") " Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.326528 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ceph" (OuterVolumeSpecName: "ceph") pod "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4" (UID: "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.327184 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4" (UID: "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.328264 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-kube-api-access-ph6w2" (OuterVolumeSpecName: "kube-api-access-ph6w2") pod "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4" (UID: "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4"). InnerVolumeSpecName "kube-api-access-ph6w2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.364175 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-inventory" (OuterVolumeSpecName: "inventory") pod "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4" (UID: "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.364924 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4" (UID: "7071f30b-0ed1-46d1-a2a7-c37d584f1ef4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.423496 4949 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.423538 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.423553 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.423568 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ph6w2\" (UniqueName: \"kubernetes.io/projected/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-kube-api-access-ph6w2\") on node \"crc\" DevicePath \"\"" Oct 01 16:18:00 crc 
kubenswrapper[4949]: I1001 16:18:00.423580 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/7071f30b-0ed1-46d1-a2a7-c37d584f1ef4-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.654337 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" event={"ID":"7071f30b-0ed1-46d1-a2a7-c37d584f1ef4","Type":"ContainerDied","Data":"41b4600846de5b05c17f54ea1e295e4fd35164a4ad4a4fa131852b6561556809"} Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.654393 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41b4600846de5b05c17f54ea1e295e4fd35164a4ad4a4fa131852b6561556809" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.654433 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.787335 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5"] Oct 01 16:18:00 crc kubenswrapper[4949]: E1001 16:18:00.787947 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7071f30b-0ed1-46d1-a2a7-c37d584f1ef4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.787981 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7071f30b-0ed1-46d1-a2a7-c37d584f1ef4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.788353 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="7071f30b-0ed1-46d1-a2a7-c37d584f1ef4" containerName="repo-setup-edpm-deployment-openstack-edpm-ipam" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.789368 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.791879 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.793985 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.795001 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.795222 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.795391 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.797458 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5"] Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.932615 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.932911 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.932982 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.933353 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:00 crc kubenswrapper[4949]: I1001 16:18:00.933537 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrvsj\" (UniqueName: \"kubernetes.io/projected/bf83f788-14e7-4c60-bdb0-174b3d343b75-kube-api-access-lrvsj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.036228 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.036344 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.036492 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.036599 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrvsj\" (UniqueName: \"kubernetes.io/projected/bf83f788-14e7-4c60-bdb0-174b3d343b75-kube-api-access-lrvsj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.036792 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.044511 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ceph\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " 
pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.047692 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-inventory\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.058655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-bootstrap-combined-ca-bundle\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.063884 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ssh-key\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.067713 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrvsj\" (UniqueName: \"kubernetes.io/projected/bf83f788-14e7-4c60-bdb0-174b3d343b75-kube-api-access-lrvsj\") pod \"bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.107980 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:18:01 crc kubenswrapper[4949]: I1001 16:18:01.703969 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5"] Oct 01 16:18:02 crc kubenswrapper[4949]: I1001 16:18:02.679304 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" event={"ID":"bf83f788-14e7-4c60-bdb0-174b3d343b75","Type":"ContainerStarted","Data":"75e7ed6e50fbaee2791d9ae17fd27270236918dfe684f9507ad4ea6a47a25867"} Oct 01 16:18:03 crc kubenswrapper[4949]: I1001 16:18:03.689687 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" event={"ID":"bf83f788-14e7-4c60-bdb0-174b3d343b75","Type":"ContainerStarted","Data":"041ad9885ca6678b3f866d642169b3432e90dbe407ba3945821c2a799fd78a2a"} Oct 01 16:18:03 crc kubenswrapper[4949]: I1001 16:18:03.714758 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" podStartSLOduration=3.012068473 podStartE2EDuration="3.714733252s" podCreationTimestamp="2025-10-01 16:18:00 +0000 UTC" firstStartedPulling="2025-10-01 16:18:01.697288173 +0000 UTC m=+2181.002894404" lastFinishedPulling="2025-10-01 16:18:02.399952962 +0000 UTC m=+2181.705559183" observedRunningTime="2025-10-01 16:18:03.706421174 +0000 UTC m=+2183.012027365" watchObservedRunningTime="2025-10-01 16:18:03.714733252 +0000 UTC m=+2183.020339493" Oct 01 16:18:18 crc kubenswrapper[4949]: I1001 16:18:18.038666 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:18:18 crc kubenswrapper[4949]: 
I1001 16:18:18.039274 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:18:18 crc kubenswrapper[4949]: I1001 16:18:18.039332 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 16:18:18 crc kubenswrapper[4949]: I1001 16:18:18.040107 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 16:18:18 crc kubenswrapper[4949]: I1001 16:18:18.040212 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" gracePeriod=600 Oct 01 16:18:18 crc kubenswrapper[4949]: E1001 16:18:18.171911 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:18:18 crc kubenswrapper[4949]: I1001 16:18:18.848631 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" exitCode=0 Oct 01 16:18:18 crc kubenswrapper[4949]: I1001 16:18:18.848674 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de"} Oct 01 16:18:18 crc kubenswrapper[4949]: I1001 16:18:18.849055 4949 scope.go:117] "RemoveContainer" containerID="90c4b46f71f5dc2c694e5dd56eb6f14191f02494f0dd0b6bd71d4be48af4cdce" Oct 01 16:18:18 crc kubenswrapper[4949]: I1001 16:18:18.849753 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:18:18 crc kubenswrapper[4949]: E1001 16:18:18.850046 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:18:30 crc kubenswrapper[4949]: I1001 16:18:30.602365 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:18:30 crc kubenswrapper[4949]: E1001 16:18:30.603554 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 
16:18:36 crc kubenswrapper[4949]: I1001 16:18:36.642499 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openstack/ceilometer-0" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="ceilometer-central-agent" probeResult="failure" output="command timed out" Oct 01 16:18:44 crc kubenswrapper[4949]: I1001 16:18:44.601071 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:18:44 crc kubenswrapper[4949]: E1001 16:18:44.601817 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:18:46 crc kubenswrapper[4949]: I1001 16:18:46.959537 4949 scope.go:117] "RemoveContainer" containerID="024d34b43b4c5fd5ed6c7dd8f29a3cd21bf0c404f94bc006eeb759562015f802" Oct 01 16:18:46 crc kubenswrapper[4949]: I1001 16:18:46.992805 4949 scope.go:117] "RemoveContainer" containerID="891cd20fd0c93b05bdce4f720688ec05518d2a32f0fbc2e2c823820ec19a534a" Oct 01 16:18:47 crc kubenswrapper[4949]: I1001 16:18:47.033910 4949 scope.go:117] "RemoveContainer" containerID="d0d4eb5c56406c9da3aa277abd13f297c734562e39e1561722f4ae49296af279" Oct 01 16:18:58 crc kubenswrapper[4949]: I1001 16:18:58.601593 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:18:58 crc kubenswrapper[4949]: E1001 16:18:58.602548 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:19:12 crc kubenswrapper[4949]: I1001 16:19:12.618891 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:19:12 crc kubenswrapper[4949]: E1001 16:19:12.619787 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.560724 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-btlcx"] Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.565318 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.575903 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btlcx"] Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.636676 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-catalog-content\") pod \"redhat-operators-btlcx\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.636762 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d4jq7\" (UniqueName: \"kubernetes.io/projected/55f19eb7-e134-48c0-96b1-4b7801cefa6b-kube-api-access-d4jq7\") pod \"redhat-operators-btlcx\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.636839 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-utilities\") pod \"redhat-operators-btlcx\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.738751 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-catalog-content\") pod \"redhat-operators-btlcx\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.738801 4949 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-d4jq7\" (UniqueName: \"kubernetes.io/projected/55f19eb7-e134-48c0-96b1-4b7801cefa6b-kube-api-access-d4jq7\") pod \"redhat-operators-btlcx\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.738866 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-utilities\") pod \"redhat-operators-btlcx\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.739348 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-catalog-content\") pod \"redhat-operators-btlcx\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.739385 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-utilities\") pod \"redhat-operators-btlcx\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.764047 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d4jq7\" (UniqueName: \"kubernetes.io/projected/55f19eb7-e134-48c0-96b1-4b7801cefa6b-kube-api-access-d4jq7\") pod \"redhat-operators-btlcx\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:17 crc kubenswrapper[4949]: I1001 16:19:17.912099 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:18 crc kubenswrapper[4949]: I1001 16:19:18.342462 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btlcx"] Oct 01 16:19:18 crc kubenswrapper[4949]: I1001 16:19:18.506585 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlcx" event={"ID":"55f19eb7-e134-48c0-96b1-4b7801cefa6b","Type":"ContainerStarted","Data":"0304078d3a21784e7183ba3737211e02ab1f727eb9ef97913dddf0e49dc60dbc"} Oct 01 16:19:19 crc kubenswrapper[4949]: I1001 16:19:19.516800 4949 generic.go:334] "Generic (PLEG): container finished" podID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerID="7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb" exitCode=0 Oct 01 16:19:19 crc kubenswrapper[4949]: I1001 16:19:19.516925 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlcx" event={"ID":"55f19eb7-e134-48c0-96b1-4b7801cefa6b","Type":"ContainerDied","Data":"7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb"} Oct 01 16:19:25 crc kubenswrapper[4949]: I1001 16:19:25.602721 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:19:25 crc kubenswrapper[4949]: E1001 16:19:25.604285 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:19:27 crc kubenswrapper[4949]: I1001 16:19:27.585589 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlcx" 
event={"ID":"55f19eb7-e134-48c0-96b1-4b7801cefa6b","Type":"ContainerStarted","Data":"2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b"} Oct 01 16:19:28 crc kubenswrapper[4949]: I1001 16:19:28.598039 4949 generic.go:334] "Generic (PLEG): container finished" podID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerID="2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b" exitCode=0 Oct 01 16:19:28 crc kubenswrapper[4949]: I1001 16:19:28.598099 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlcx" event={"ID":"55f19eb7-e134-48c0-96b1-4b7801cefa6b","Type":"ContainerDied","Data":"2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b"} Oct 01 16:19:30 crc kubenswrapper[4949]: I1001 16:19:30.619042 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlcx" event={"ID":"55f19eb7-e134-48c0-96b1-4b7801cefa6b","Type":"ContainerStarted","Data":"086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8"} Oct 01 16:19:30 crc kubenswrapper[4949]: I1001 16:19:30.675294 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-btlcx" podStartSLOduration=3.951711428 podStartE2EDuration="13.675268702s" podCreationTimestamp="2025-10-01 16:19:17 +0000 UTC" firstStartedPulling="2025-10-01 16:19:19.519845314 +0000 UTC m=+2258.825451505" lastFinishedPulling="2025-10-01 16:19:29.243402578 +0000 UTC m=+2268.549008779" observedRunningTime="2025-10-01 16:19:30.666282316 +0000 UTC m=+2269.971888517" watchObservedRunningTime="2025-10-01 16:19:30.675268702 +0000 UTC m=+2269.980874893" Oct 01 16:19:37 crc kubenswrapper[4949]: I1001 16:19:37.913414 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:37 crc kubenswrapper[4949]: I1001 16:19:37.913980 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:37 crc kubenswrapper[4949]: I1001 16:19:37.975201 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:38 crc kubenswrapper[4949]: I1001 16:19:38.773578 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 16:19:38 crc kubenswrapper[4949]: I1001 16:19:38.889621 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-btlcx"] Oct 01 16:19:38 crc kubenswrapper[4949]: I1001 16:19:38.935718 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzv5b"] Oct 01 16:19:38 crc kubenswrapper[4949]: I1001 16:19:38.936646 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zzv5b" podUID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerName="registry-server" containerID="cri-o://558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278" gracePeriod=2 Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.432482 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.496504 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6wkb\" (UniqueName: \"kubernetes.io/projected/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-kube-api-access-n6wkb\") pod \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.496569 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-utilities\") pod \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.496614 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-catalog-content\") pod \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\" (UID: \"46dbc1ee-6aff-455c-9f06-0a9dc719ce20\") " Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.500578 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-utilities" (OuterVolumeSpecName: "utilities") pod "46dbc1ee-6aff-455c-9f06-0a9dc719ce20" (UID: "46dbc1ee-6aff-455c-9f06-0a9dc719ce20"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.507422 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-kube-api-access-n6wkb" (OuterVolumeSpecName: "kube-api-access-n6wkb") pod "46dbc1ee-6aff-455c-9f06-0a9dc719ce20" (UID: "46dbc1ee-6aff-455c-9f06-0a9dc719ce20"). InnerVolumeSpecName "kube-api-access-n6wkb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.577354 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "46dbc1ee-6aff-455c-9f06-0a9dc719ce20" (UID: "46dbc1ee-6aff-455c-9f06-0a9dc719ce20"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.597628 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6wkb\" (UniqueName: \"kubernetes.io/projected/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-kube-api-access-n6wkb\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.597661 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.597669 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/46dbc1ee-6aff-455c-9f06-0a9dc719ce20-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.602410 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:19:39 crc kubenswrapper[4949]: E1001 16:19:39.602667 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:19:39 
crc kubenswrapper[4949]: I1001 16:19:39.702623 4949 generic.go:334] "Generic (PLEG): container finished" podID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerID="558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278" exitCode=0 Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.702683 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzv5b" event={"ID":"46dbc1ee-6aff-455c-9f06-0a9dc719ce20","Type":"ContainerDied","Data":"558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278"} Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.702742 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zzv5b" event={"ID":"46dbc1ee-6aff-455c-9f06-0a9dc719ce20","Type":"ContainerDied","Data":"5c3dcb641c531c097f628d9961b61a4166e31640cf6440032ae1555a668d030b"} Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.702765 4949 scope.go:117] "RemoveContainer" containerID="558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.702693 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zzv5b" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.723458 4949 scope.go:117] "RemoveContainer" containerID="a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.724664 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zzv5b"] Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.733436 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zzv5b"] Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.741419 4949 scope.go:117] "RemoveContainer" containerID="67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.779247 4949 scope.go:117] "RemoveContainer" containerID="558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278" Oct 01 16:19:39 crc kubenswrapper[4949]: E1001 16:19:39.779787 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278\": container with ID starting with 558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278 not found: ID does not exist" containerID="558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.779841 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278"} err="failed to get container status \"558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278\": rpc error: code = NotFound desc = could not find container \"558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278\": container with ID starting with 558991c982a624fe9e1f5636c6e4032ee37472e21ef1e3d5ada2e38b61e0e278 not found: ID does 
not exist" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.779876 4949 scope.go:117] "RemoveContainer" containerID="a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e" Oct 01 16:19:39 crc kubenswrapper[4949]: E1001 16:19:39.780298 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e\": container with ID starting with a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e not found: ID does not exist" containerID="a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.780354 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e"} err="failed to get container status \"a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e\": rpc error: code = NotFound desc = could not find container \"a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e\": container with ID starting with a658f66313914d504a77801496ecb735cbb513e45dce76b2c7abf5d07976290e not found: ID does not exist" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.780391 4949 scope.go:117] "RemoveContainer" containerID="67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f" Oct 01 16:19:39 crc kubenswrapper[4949]: E1001 16:19:39.780749 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f\": container with ID starting with 67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f not found: ID does not exist" containerID="67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f" Oct 01 16:19:39 crc kubenswrapper[4949]: I1001 16:19:39.780781 4949 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f"} err="failed to get container status \"67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f\": rpc error: code = NotFound desc = could not find container \"67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f\": container with ID starting with 67338827caf75af88a51c34a6ffce8b5b2f5227e012bfb5905871a4cf443fc8f not found: ID does not exist" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.082379 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt"] Oct 01 16:19:41 crc kubenswrapper[4949]: E1001 16:19:41.082970 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerName="extract-utilities" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.082981 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerName="extract-utilities" Oct 01 16:19:41 crc kubenswrapper[4949]: E1001 16:19:41.083017 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerName="registry-server" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.083023 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerName="registry-server" Oct 01 16:19:41 crc kubenswrapper[4949]: E1001 16:19:41.083036 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerName="extract-content" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.083043 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerName="extract-content" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.083214 4949 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" containerName="registry-server" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.084439 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.086530 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.116367 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt"] Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.121876 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h2vkx\" (UniqueName: \"kubernetes.io/projected/5c4b466f-a6c5-447e-84ca-70e154cd29c7-kube-api-access-h2vkx\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.121922 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.122046 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-util\") pod 
\"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.223601 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h2vkx\" (UniqueName: \"kubernetes.io/projected/5c4b466f-a6c5-447e-84ca-70e154cd29c7-kube-api-access-h2vkx\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.223657 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.223709 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.224300 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-util\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " 
pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.224347 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-bundle\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.241563 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h2vkx\" (UniqueName: \"kubernetes.io/projected/5c4b466f-a6c5-447e-84ca-70e154cd29c7-kube-api-access-h2vkx\") pod \"8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.274610 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs"] Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.276484 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.288384 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs"] Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.324590 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.324662 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.324691 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p6rm\" (UniqueName: \"kubernetes.io/projected/bcdec972-172f-4ddb-83ed-e421a89a9a15-kube-api-access-5p6rm\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.401244 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.429278 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.429363 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.429395 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5p6rm\" (UniqueName: \"kubernetes.io/projected/bcdec972-172f-4ddb-83ed-e421a89a9a15-kube-api-access-5p6rm\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.429786 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-bundle\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: 
I1001 16:19:41.430035 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-util\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.447634 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p6rm\" (UniqueName: \"kubernetes.io/projected/bcdec972-172f-4ddb-83ed-e421a89a9a15-kube-api-access-5p6rm\") pod \"fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.612363 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.617541 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46dbc1ee-6aff-455c-9f06-0a9dc719ce20" path="/var/lib/kubelet/pods/46dbc1ee-6aff-455c-9f06-0a9dc719ce20/volumes" Oct 01 16:19:41 crc kubenswrapper[4949]: I1001 16:19:41.849646 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt"] Oct 01 16:19:42 crc kubenswrapper[4949]: I1001 16:19:42.063413 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs"] Oct 01 16:19:42 crc kubenswrapper[4949]: W1001 16:19:42.079552 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcdec972_172f_4ddb_83ed_e421a89a9a15.slice/crio-4a63e3ad43e72266527c12e24353b1f5626e9e76060b1c50988482afb48790c0 WatchSource:0}: Error finding container 4a63e3ad43e72266527c12e24353b1f5626e9e76060b1c50988482afb48790c0: Status 404 returned error can't find the container with id 4a63e3ad43e72266527c12e24353b1f5626e9e76060b1c50988482afb48790c0 Oct 01 16:19:42 crc kubenswrapper[4949]: I1001 16:19:42.742318 4949 generic.go:334] "Generic (PLEG): container finished" podID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerID="e954977cd7dfa1dfc31b5b54ad673f90b64cd8de162a7df67b56d6f8e5a93eea" exitCode=0 Oct 01 16:19:42 crc kubenswrapper[4949]: I1001 16:19:42.742415 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" event={"ID":"5c4b466f-a6c5-447e-84ca-70e154cd29c7","Type":"ContainerDied","Data":"e954977cd7dfa1dfc31b5b54ad673f90b64cd8de162a7df67b56d6f8e5a93eea"} Oct 01 16:19:42 crc kubenswrapper[4949]: I1001 16:19:42.742748 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" event={"ID":"5c4b466f-a6c5-447e-84ca-70e154cd29c7","Type":"ContainerStarted","Data":"cebb3b9bf1f67100faf4e4e96ae6597c1c9ea21e9eee243784aa429a14fc630c"} Oct 01 16:19:42 crc kubenswrapper[4949]: I1001 16:19:42.745231 4949 generic.go:334] "Generic (PLEG): container finished" podID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerID="3ddc8aa976c59f5d0063a7e3a16c145cc85ce200bab36cb8770ea4071ab1bb7a" exitCode=0 Oct 01 16:19:42 crc kubenswrapper[4949]: I1001 16:19:42.745272 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" 
event={"ID":"bcdec972-172f-4ddb-83ed-e421a89a9a15","Type":"ContainerDied","Data":"3ddc8aa976c59f5d0063a7e3a16c145cc85ce200bab36cb8770ea4071ab1bb7a"} Oct 01 16:19:42 crc kubenswrapper[4949]: I1001 16:19:42.745303 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" event={"ID":"bcdec972-172f-4ddb-83ed-e421a89a9a15","Type":"ContainerStarted","Data":"4a63e3ad43e72266527c12e24353b1f5626e9e76060b1c50988482afb48790c0"} Oct 01 16:19:44 crc kubenswrapper[4949]: I1001 16:19:44.765234 4949 generic.go:334] "Generic (PLEG): container finished" podID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerID="a26fafd07855d49cc66ce7711719a5a354f7aca4b544de10f87ad4625e939bf4" exitCode=0 Oct 01 16:19:44 crc kubenswrapper[4949]: I1001 16:19:44.765334 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" event={"ID":"5c4b466f-a6c5-447e-84ca-70e154cd29c7","Type":"ContainerDied","Data":"a26fafd07855d49cc66ce7711719a5a354f7aca4b544de10f87ad4625e939bf4"} Oct 01 16:19:44 crc kubenswrapper[4949]: I1001 16:19:44.768433 4949 generic.go:334] "Generic (PLEG): container finished" podID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerID="73431a203f422fc01b5a65b09d30fa459991716340257c55df4cf47d07393b54" exitCode=0 Oct 01 16:19:44 crc kubenswrapper[4949]: I1001 16:19:44.768475 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" event={"ID":"bcdec972-172f-4ddb-83ed-e421a89a9a15","Type":"ContainerDied","Data":"73431a203f422fc01b5a65b09d30fa459991716340257c55df4cf47d07393b54"} Oct 01 16:19:45 crc kubenswrapper[4949]: I1001 16:19:45.789033 4949 generic.go:334] "Generic (PLEG): container finished" podID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerID="ced3ae9518c0d695b8f2c04fc39ff28af6d27497c2170025245995bf0a976367" exitCode=0 
Oct 01 16:19:45 crc kubenswrapper[4949]: I1001 16:19:45.789094 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" event={"ID":"5c4b466f-a6c5-447e-84ca-70e154cd29c7","Type":"ContainerDied","Data":"ced3ae9518c0d695b8f2c04fc39ff28af6d27497c2170025245995bf0a976367"} Oct 01 16:19:45 crc kubenswrapper[4949]: I1001 16:19:45.792932 4949 generic.go:334] "Generic (PLEG): container finished" podID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerID="fdc18309f46c7e48f27dc0715bcae6287021e4282c7b8eff1219e70753bec77a" exitCode=0 Oct 01 16:19:45 crc kubenswrapper[4949]: I1001 16:19:45.792994 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" event={"ID":"bcdec972-172f-4ddb-83ed-e421a89a9a15","Type":"ContainerDied","Data":"fdc18309f46c7e48f27dc0715bcae6287021e4282c7b8eff1219e70753bec77a"} Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.184504 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.190571 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.337482 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-bundle\") pod \"bcdec972-172f-4ddb-83ed-e421a89a9a15\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.337532 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p6rm\" (UniqueName: \"kubernetes.io/projected/bcdec972-172f-4ddb-83ed-e421a89a9a15-kube-api-access-5p6rm\") pod \"bcdec972-172f-4ddb-83ed-e421a89a9a15\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.337631 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-util\") pod \"bcdec972-172f-4ddb-83ed-e421a89a9a15\" (UID: \"bcdec972-172f-4ddb-83ed-e421a89a9a15\") " Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.337651 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-util\") pod \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.337691 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h2vkx\" (UniqueName: \"kubernetes.io/projected/5c4b466f-a6c5-447e-84ca-70e154cd29c7-kube-api-access-h2vkx\") pod \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.337728 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-bundle\") pod \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\" (UID: \"5c4b466f-a6c5-447e-84ca-70e154cd29c7\") " Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.338611 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-bundle" (OuterVolumeSpecName: "bundle") pod "bcdec972-172f-4ddb-83ed-e421a89a9a15" (UID: "bcdec972-172f-4ddb-83ed-e421a89a9a15"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.341264 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-bundle" (OuterVolumeSpecName: "bundle") pod "5c4b466f-a6c5-447e-84ca-70e154cd29c7" (UID: "5c4b466f-a6c5-447e-84ca-70e154cd29c7"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.343814 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bcdec972-172f-4ddb-83ed-e421a89a9a15-kube-api-access-5p6rm" (OuterVolumeSpecName: "kube-api-access-5p6rm") pod "bcdec972-172f-4ddb-83ed-e421a89a9a15" (UID: "bcdec972-172f-4ddb-83ed-e421a89a9a15"). InnerVolumeSpecName "kube-api-access-5p6rm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.350473 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c4b466f-a6c5-447e-84ca-70e154cd29c7-kube-api-access-h2vkx" (OuterVolumeSpecName: "kube-api-access-h2vkx") pod "5c4b466f-a6c5-447e-84ca-70e154cd29c7" (UID: "5c4b466f-a6c5-447e-84ca-70e154cd29c7"). InnerVolumeSpecName "kube-api-access-h2vkx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.362401 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-util" (OuterVolumeSpecName: "util") pod "bcdec972-172f-4ddb-83ed-e421a89a9a15" (UID: "bcdec972-172f-4ddb-83ed-e421a89a9a15"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.440357 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h2vkx\" (UniqueName: \"kubernetes.io/projected/5c4b466f-a6c5-447e-84ca-70e154cd29c7-kube-api-access-h2vkx\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.440390 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.440399 4949 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.440407 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5p6rm\" (UniqueName: \"kubernetes.io/projected/bcdec972-172f-4ddb-83ed-e421a89a9a15-kube-api-access-5p6rm\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.440416 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/bcdec972-172f-4ddb-83ed-e421a89a9a15-util\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.610501 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-util" 
(OuterVolumeSpecName: "util") pod "5c4b466f-a6c5-447e-84ca-70e154cd29c7" (UID: "5c4b466f-a6c5-447e-84ca-70e154cd29c7"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.643533 4949 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/5c4b466f-a6c5-447e-84ca-70e154cd29c7-util\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.808612 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" event={"ID":"5c4b466f-a6c5-447e-84ca-70e154cd29c7","Type":"ContainerDied","Data":"cebb3b9bf1f67100faf4e4e96ae6597c1c9ea21e9eee243784aa429a14fc630c"} Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.808903 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cebb3b9bf1f67100faf4e4e96ae6597c1c9ea21e9eee243784aa429a14fc630c" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.808690 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.811620 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" event={"ID":"bcdec972-172f-4ddb-83ed-e421a89a9a15","Type":"ContainerDied","Data":"4a63e3ad43e72266527c12e24353b1f5626e9e76060b1c50988482afb48790c0"} Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.811656 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a63e3ad43e72266527c12e24353b1f5626e9e76060b1c50988482afb48790c0" Oct 01 16:19:47 crc kubenswrapper[4949]: I1001 16:19:47.811679 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs" Oct 01 16:19:49 crc kubenswrapper[4949]: I1001 16:19:49.832214 4949 generic.go:334] "Generic (PLEG): container finished" podID="bf83f788-14e7-4c60-bdb0-174b3d343b75" containerID="041ad9885ca6678b3f866d642169b3432e90dbe407ba3945821c2a799fd78a2a" exitCode=0 Oct 01 16:19:49 crc kubenswrapper[4949]: I1001 16:19:49.832281 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" event={"ID":"bf83f788-14e7-4c60-bdb0-174b3d343b75","Type":"ContainerDied","Data":"041ad9885ca6678b3f866d642169b3432e90dbe407ba3945821c2a799fd78a2a"} Oct 01 16:19:50 crc kubenswrapper[4949]: I1001 16:19:50.601909 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:19:50 crc kubenswrapper[4949]: E1001 16:19:50.602681 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.358269 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.524388 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-bootstrap-combined-ca-bundle\") pod \"bf83f788-14e7-4c60-bdb0-174b3d343b75\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.524460 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrvsj\" (UniqueName: \"kubernetes.io/projected/bf83f788-14e7-4c60-bdb0-174b3d343b75-kube-api-access-lrvsj\") pod \"bf83f788-14e7-4c60-bdb0-174b3d343b75\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.524669 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ceph\") pod \"bf83f788-14e7-4c60-bdb0-174b3d343b75\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.524857 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-inventory\") pod \"bf83f788-14e7-4c60-bdb0-174b3d343b75\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.524947 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ssh-key\") pod \"bf83f788-14e7-4c60-bdb0-174b3d343b75\" (UID: \"bf83f788-14e7-4c60-bdb0-174b3d343b75\") " Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.531297 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ceph" (OuterVolumeSpecName: "ceph") pod "bf83f788-14e7-4c60-bdb0-174b3d343b75" (UID: "bf83f788-14e7-4c60-bdb0-174b3d343b75"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.548285 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf83f788-14e7-4c60-bdb0-174b3d343b75-kube-api-access-lrvsj" (OuterVolumeSpecName: "kube-api-access-lrvsj") pod "bf83f788-14e7-4c60-bdb0-174b3d343b75" (UID: "bf83f788-14e7-4c60-bdb0-174b3d343b75"). InnerVolumeSpecName "kube-api-access-lrvsj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.548458 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "bf83f788-14e7-4c60-bdb0-174b3d343b75" (UID: "bf83f788-14e7-4c60-bdb0-174b3d343b75"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.561659 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-inventory" (OuterVolumeSpecName: "inventory") pod "bf83f788-14e7-4c60-bdb0-174b3d343b75" (UID: "bf83f788-14e7-4c60-bdb0-174b3d343b75"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.578521 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bf83f788-14e7-4c60-bdb0-174b3d343b75" (UID: "bf83f788-14e7-4c60-bdb0-174b3d343b75"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.627706 4949 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.627905 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrvsj\" (UniqueName: \"kubernetes.io/projected/bf83f788-14e7-4c60-bdb0-174b3d343b75-kube-api-access-lrvsj\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.627963 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.628051 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.628160 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bf83f788-14e7-4c60-bdb0-174b3d343b75-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.852763 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" event={"ID":"bf83f788-14e7-4c60-bdb0-174b3d343b75","Type":"ContainerDied","Data":"75e7ed6e50fbaee2791d9ae17fd27270236918dfe684f9507ad4ea6a47a25867"} Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.852801 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="75e7ed6e50fbaee2791d9ae17fd27270236918dfe684f9507ad4ea6a47a25867" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.853116 4949 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.934541 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj"] Oct 01 16:19:51 crc kubenswrapper[4949]: E1001 16:19:51.934935 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerName="util" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.934953 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerName="util" Oct 01 16:19:51 crc kubenswrapper[4949]: E1001 16:19:51.934970 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerName="pull" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.934976 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerName="pull" Oct 01 16:19:51 crc kubenswrapper[4949]: E1001 16:19:51.934985 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerName="pull" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.934992 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerName="pull" Oct 01 16:19:51 crc kubenswrapper[4949]: E1001 16:19:51.935015 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerName="extract" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.935022 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerName="extract" Oct 01 16:19:51 crc kubenswrapper[4949]: E1001 16:19:51.935038 4949 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerName="util" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.935045 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerName="util" Oct 01 16:19:51 crc kubenswrapper[4949]: E1001 16:19:51.935053 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bf83f788-14e7-4c60-bdb0-174b3d343b75" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.935060 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="bf83f788-14e7-4c60-bdb0-174b3d343b75" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 01 16:19:51 crc kubenswrapper[4949]: E1001 16:19:51.935077 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerName="extract" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.935083 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerName="extract" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.935272 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="bf83f788-14e7-4c60-bdb0-174b3d343b75" containerName="bootstrap-edpm-deployment-openstack-edpm-ipam" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.935284 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c4b466f-a6c5-447e-84ca-70e154cd29c7" containerName="extract" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.935292 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="bcdec972-172f-4ddb-83ed-e421a89a9a15" containerName="extract" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.935848 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.941378 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.942223 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.942423 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.942558 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.942686 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:19:51 crc kubenswrapper[4949]: I1001 16:19:51.960735 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj"] Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.040406 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.040824 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: 
\"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.041009 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ckzfn\" (UniqueName: \"kubernetes.io/projected/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-kube-api-access-ckzfn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.041196 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.142969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ckzfn\" (UniqueName: \"kubernetes.io/projected/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-kube-api-access-ckzfn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.143084 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: 
I1001 16:19:52.143268 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.143432 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.148203 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ceph\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.148367 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ssh-key\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.149595 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-inventory\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: 
\"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.162959 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ckzfn\" (UniqueName: \"kubernetes.io/projected/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-kube-api-access-ckzfn\") pod \"configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.250955 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:19:52 crc kubenswrapper[4949]: I1001 16:19:52.854204 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj"] Oct 01 16:19:52 crc kubenswrapper[4949]: W1001 16:19:52.864041 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod05704de2_46b5_4cce_bea8_9e1a00e0d2a5.slice/crio-ad7583957ade49049dacf2a0a3e5324c480fbce66d751502f16b23ff4d2ce9c3 WatchSource:0}: Error finding container ad7583957ade49049dacf2a0a3e5324c480fbce66d751502f16b23ff4d2ce9c3: Status 404 returned error can't find the container with id ad7583957ade49049dacf2a0a3e5324c480fbce66d751502f16b23ff4d2ce9c3 Oct 01 16:19:53 crc kubenswrapper[4949]: I1001 16:19:53.883284 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" event={"ID":"05704de2-46b5-4cce-bea8-9e1a00e0d2a5","Type":"ContainerStarted","Data":"06388d814765b728a78e162d71cbb2f14dcbb834f5b736344045ce170d79fe89"} Oct 01 16:19:53 crc kubenswrapper[4949]: I1001 16:19:53.883527 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" event={"ID":"05704de2-46b5-4cce-bea8-9e1a00e0d2a5","Type":"ContainerStarted","Data":"ad7583957ade49049dacf2a0a3e5324c480fbce66d751502f16b23ff4d2ce9c3"} Oct 01 16:19:53 crc kubenswrapper[4949]: I1001 16:19:53.905284 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" podStartSLOduration=2.46660195 podStartE2EDuration="2.905264374s" podCreationTimestamp="2025-10-01 16:19:51 +0000 UTC" firstStartedPulling="2025-10-01 16:19:52.87242892 +0000 UTC m=+2292.178035111" lastFinishedPulling="2025-10-01 16:19:53.311091334 +0000 UTC m=+2292.616697535" observedRunningTime="2025-10-01 16:19:53.899237459 +0000 UTC m=+2293.204843650" watchObservedRunningTime="2025-10-01 16:19:53.905264374 +0000 UTC m=+2293.210870565" Oct 01 16:19:58 crc kubenswrapper[4949]: I1001 16:19:58.387425 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc"] Oct 01 16:19:58 crc kubenswrapper[4949]: I1001 16:19:58.389326 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc" Oct 01 16:19:58 crc kubenswrapper[4949]: I1001 16:19:58.401497 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc"] Oct 01 16:19:58 crc kubenswrapper[4949]: I1001 16:19:58.479731 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q65rh\" (UniqueName: \"kubernetes.io/projected/1d72f965-53c3-4cf1-915b-41cec48f788b-kube-api-access-q65rh\") pod \"nmstate-operator-858ddd8f98-lw9lc\" (UID: \"1d72f965-53c3-4cf1-915b-41cec48f788b\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc" Oct 01 16:19:58 crc kubenswrapper[4949]: I1001 16:19:58.581570 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q65rh\" (UniqueName: \"kubernetes.io/projected/1d72f965-53c3-4cf1-915b-41cec48f788b-kube-api-access-q65rh\") pod \"nmstate-operator-858ddd8f98-lw9lc\" (UID: \"1d72f965-53c3-4cf1-915b-41cec48f788b\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc" Oct 01 16:19:58 crc kubenswrapper[4949]: I1001 16:19:58.602666 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q65rh\" (UniqueName: \"kubernetes.io/projected/1d72f965-53c3-4cf1-915b-41cec48f788b-kube-api-access-q65rh\") pod \"nmstate-operator-858ddd8f98-lw9lc\" (UID: \"1d72f965-53c3-4cf1-915b-41cec48f788b\") " pod="openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc" Oct 01 16:19:58 crc kubenswrapper[4949]: I1001 16:19:58.705361 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc" Oct 01 16:19:59 crc kubenswrapper[4949]: I1001 16:19:59.235626 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc"] Oct 01 16:19:59 crc kubenswrapper[4949]: I1001 16:19:59.935389 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc" event={"ID":"1d72f965-53c3-4cf1-915b-41cec48f788b","Type":"ContainerStarted","Data":"05e11b67610d49d313eedf0fd618befcfe7b294a2ded350abd03329f99557b67"} Oct 01 16:20:01 crc kubenswrapper[4949]: I1001 16:20:01.871701 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv"] Oct 01 16:20:01 crc kubenswrapper[4949]: I1001 16:20:01.891259 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5"] Oct 01 16:20:01 crc kubenswrapper[4949]: I1001 16:20:01.891689 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" podUID="d8e345e7-61b3-4723-8332-bb171b328a6a" containerName="nmstate-webhook" containerID="cri-o://7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2" gracePeriod=30 Oct 01 16:20:01 crc kubenswrapper[4949]: I1001 16:20:01.906693 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-handler-g2cvf"] Oct 01 16:20:01 crc kubenswrapper[4949]: I1001 16:20:01.906924 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-nmstate/nmstate-handler-g2cvf" podUID="3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" containerName="nmstate-handler" containerID="cri-o://96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141" gracePeriod=30 Oct 01 16:20:01 crc kubenswrapper[4949]: I1001 16:20:01.954980 4949 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" podUID="42191e90-2de0-4988-860e-61d057d81232" containerName="nmstate-metrics" containerID="cri-o://2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6" gracePeriod=30 Oct 01 16:20:01 crc kubenswrapper[4949]: I1001 16:20:01.955389 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc" event={"ID":"1d72f965-53c3-4cf1-915b-41cec48f788b","Type":"ContainerStarted","Data":"3f2935804337340f3cc40a8f5dccaed089f8e14e71788d0906fcf7aab9807504"} Oct 01 16:20:01 crc kubenswrapper[4949]: I1001 16:20:01.955516 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" podUID="42191e90-2de0-4988-860e-61d057d81232" containerName="kube-rbac-proxy" containerID="cri-o://bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564" gracePeriod=30 Oct 01 16:20:01 crc kubenswrapper[4949]: I1001 16:20:01.989059 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-operator-858ddd8f98-lw9lc" podStartSLOduration=2.008658051 podStartE2EDuration="3.989040265s" podCreationTimestamp="2025-10-01 16:19:58 +0000 UTC" firstStartedPulling="2025-10-01 16:19:59.239297976 +0000 UTC m=+2298.544904177" lastFinishedPulling="2025-10-01 16:20:01.2196802 +0000 UTC m=+2300.525286391" observedRunningTime="2025-10-01 16:20:01.983520934 +0000 UTC m=+2301.289127115" watchObservedRunningTime="2025-10-01 16:20:01.989040265 +0000 UTC m=+2301.294646456" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.030505 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg"] Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.030778 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" podUID="7045f77f-0c3a-4e54-8378-2fcda1244f0c" 
containerName="nmstate-operator" containerID="cri-o://1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a" gracePeriod=30 Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.082565 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t"] Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.083842 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.105072 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t"] Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.127796 4949 patch_prober.go:28] interesting pod/nmstate-webhook-6d689559c5-gzrq5 container/nmstate-webhook namespace/openshift-nmstate: Readiness probe status=failure output="Get \"https://10.217.0.37:9443/readyz\": dial tcp 10.217.0.37:9443: connect: connection refused" start-of-body= Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.127849 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" podUID="d8e345e7-61b3-4723-8332-bb171b328a6a" containerName="nmstate-webhook" probeResult="failure" output="Get \"https://10.217.0.37:9443/readyz\": dial tcp 10.217.0.37:9443: connect: connection refused" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.159658 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a0b8afa-4a92-444c-b7a1-cf9939e5d88c-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-b9x5t\" (UID: \"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.159708 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kube-api-access-bjrgv\" (UniqueName: \"kubernetes.io/projected/8a0b8afa-4a92-444c-b7a1-cf9939e5d88c-kube-api-access-bjrgv\") pod \"nmstate-console-plugin-6b874cbd85-b9x5t\" (UID: \"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.159820 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a0b8afa-4a92-444c-b7a1-cf9939e5d88c-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-b9x5t\" (UID: \"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.261365 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a0b8afa-4a92-444c-b7a1-cf9939e5d88c-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-b9x5t\" (UID: \"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.261409 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bjrgv\" (UniqueName: \"kubernetes.io/projected/8a0b8afa-4a92-444c-b7a1-cf9939e5d88c-kube-api-access-bjrgv\") pod \"nmstate-console-plugin-6b874cbd85-b9x5t\" (UID: \"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.261522 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a0b8afa-4a92-444c-b7a1-cf9939e5d88c-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-b9x5t\" (UID: \"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c\") " 
pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.262358 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/8a0b8afa-4a92-444c-b7a1-cf9939e5d88c-nginx-conf\") pod \"nmstate-console-plugin-6b874cbd85-b9x5t\" (UID: \"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.286207 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/8a0b8afa-4a92-444c-b7a1-cf9939e5d88c-plugin-serving-cert\") pod \"nmstate-console-plugin-6b874cbd85-b9x5t\" (UID: \"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.290906 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bjrgv\" (UniqueName: \"kubernetes.io/projected/8a0b8afa-4a92-444c-b7a1-cf9939e5d88c-kube-api-access-bjrgv\") pod \"nmstate-console-plugin-6b874cbd85-b9x5t\" (UID: \"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c\") " pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.382714 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.400226 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-handler-g2cvf" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.448820 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-handler-g5nwf"] Oct 01 16:20:02 crc kubenswrapper[4949]: E1001 16:20:02.449177 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" containerName="nmstate-handler" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.449188 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" containerName="nmstate-handler" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.449341 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" containerName="nmstate-handler" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.449917 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-g5nwf" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.566621 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-ovs-socket\") pod \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.566726 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-nmstate-lock\") pod \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.566786 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-dbus-socket\") pod 
\"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.566868 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tj5qn\" (UniqueName: \"kubernetes.io/projected/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-kube-api-access-tj5qn\") pod \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\" (UID: \"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9\") " Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.567171 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3c565206-451b-4bfc-bef6-20b6f3a33546-dbus-socket\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.567200 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3c565206-451b-4bfc-bef6-20b6f3a33546-ovs-socket\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf" Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.567192 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-dbus-socket" (OuterVolumeSpecName: "dbus-socket") pod "3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" (UID: "3628bbb1-354b-40b0-89ac-3cc73d3a0ec9"). InnerVolumeSpecName "dbus-socket". 
PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.567231 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hxbz\" (UniqueName: \"kubernetes.io/projected/3c565206-451b-4bfc-bef6-20b6f3a33546-kube-api-access-8hxbz\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.567237 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-ovs-socket" (OuterVolumeSpecName: "ovs-socket") pod "3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" (UID: "3628bbb1-354b-40b0-89ac-3cc73d3a0ec9"). InnerVolumeSpecName "ovs-socket". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.567254 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-nmstate-lock" (OuterVolumeSpecName: "nmstate-lock") pod "3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" (UID: "3628bbb1-354b-40b0-89ac-3cc73d3a0ec9"). InnerVolumeSpecName "nmstate-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.570222 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3c565206-451b-4bfc-bef6-20b6f3a33546-nmstate-lock\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.570540 4949 reconciler_common.go:293] "Volume detached for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-ovs-socket\") on node \"crc\" DevicePath \"\""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.570558 4949 reconciler_common.go:293] "Volume detached for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-nmstate-lock\") on node \"crc\" DevicePath \"\""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.570605 4949 reconciler_common.go:293] "Volume detached for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-dbus-socket\") on node \"crc\" DevicePath \"\""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.573657 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-kube-api-access-tj5qn" (OuterVolumeSpecName: "kube-api-access-tj5qn") pod "3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" (UID: "3628bbb1-354b-40b0-89ac-3cc73d3a0ec9"). InnerVolumeSpecName "kube-api-access-tj5qn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.606275 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.673593 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d8e345e7-61b3-4723-8332-bb171b328a6a-tls-key-pair\") pod \"d8e345e7-61b3-4723-8332-bb171b328a6a\" (UID: \"d8e345e7-61b3-4723-8332-bb171b328a6a\") "
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.673694 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdprg\" (UniqueName: \"kubernetes.io/projected/d8e345e7-61b3-4723-8332-bb171b328a6a-kube-api-access-fdprg\") pod \"d8e345e7-61b3-4723-8332-bb171b328a6a\" (UID: \"d8e345e7-61b3-4723-8332-bb171b328a6a\") "
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.673826 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3c565206-451b-4bfc-bef6-20b6f3a33546-dbus-socket\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.673885 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3c565206-451b-4bfc-bef6-20b6f3a33546-ovs-socket\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.673954 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hxbz\" (UniqueName: \"kubernetes.io/projected/3c565206-451b-4bfc-bef6-20b6f3a33546-kube-api-access-8hxbz\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.674029 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3c565206-451b-4bfc-bef6-20b6f3a33546-nmstate-lock\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.674339 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tj5qn\" (UniqueName: \"kubernetes.io/projected/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9-kube-api-access-tj5qn\") on node \"crc\" DevicePath \"\""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.676358 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nmstate-lock\" (UniqueName: \"kubernetes.io/host-path/3c565206-451b-4bfc-bef6-20b6f3a33546-nmstate-lock\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.676813 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dbus-socket\" (UniqueName: \"kubernetes.io/host-path/3c565206-451b-4bfc-bef6-20b6f3a33546-dbus-socket\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.676856 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovs-socket\" (UniqueName: \"kubernetes.io/host-path/3c565206-451b-4bfc-bef6-20b6f3a33546-ovs-socket\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.681085 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d8e345e7-61b3-4723-8332-bb171b328a6a-tls-key-pair" (OuterVolumeSpecName: "tls-key-pair") pod "d8e345e7-61b3-4723-8332-bb171b328a6a" (UID: "d8e345e7-61b3-4723-8332-bb171b328a6a"). InnerVolumeSpecName "tls-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.690272 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8e345e7-61b3-4723-8332-bb171b328a6a-kube-api-access-fdprg" (OuterVolumeSpecName: "kube-api-access-fdprg") pod "d8e345e7-61b3-4723-8332-bb171b328a6a" (UID: "d8e345e7-61b3-4723-8332-bb171b328a6a"). InnerVolumeSpecName "kube-api-access-fdprg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.695446 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hxbz\" (UniqueName: \"kubernetes.io/projected/3c565206-451b-4bfc-bef6-20b6f3a33546-kube-api-access-8hxbz\") pod \"nmstate-handler-g5nwf\" (UID: \"3c565206-451b-4bfc-bef6-20b6f3a33546\") " pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.728507 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.776731 4949 reconciler_common.go:293] "Volume detached for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/d8e345e7-61b3-4723-8332-bb171b328a6a-tls-key-pair\") on node \"crc\" DevicePath \"\""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.776766 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdprg\" (UniqueName: \"kubernetes.io/projected/d8e345e7-61b3-4723-8332-bb171b328a6a-kube-api-access-fdprg\") on node \"crc\" DevicePath \"\""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.803669 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"]
Oct 01 16:20:02 crc kubenswrapper[4949]: E1001 16:20:02.816947 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d8e345e7-61b3-4723-8332-bb171b328a6a" containerName="nmstate-webhook"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.816997 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8e345e7-61b3-4723-8332-bb171b328a6a" containerName="nmstate-webhook"
Oct 01 16:20:02 crc kubenswrapper[4949]: E1001 16:20:02.817062 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42191e90-2de0-4988-860e-61d057d81232" containerName="kube-rbac-proxy"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.817071 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="42191e90-2de0-4988-860e-61d057d81232" containerName="kube-rbac-proxy"
Oct 01 16:20:02 crc kubenswrapper[4949]: E1001 16:20:02.817092 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42191e90-2de0-4988-860e-61d057d81232" containerName="nmstate-metrics"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.817098 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="42191e90-2de0-4988-860e-61d057d81232" containerName="nmstate-metrics"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.817498 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="42191e90-2de0-4988-860e-61d057d81232" containerName="kube-rbac-proxy"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.817592 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d8e345e7-61b3-4723-8332-bb171b328a6a" containerName="nmstate-webhook"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.817606 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="42191e90-2de0-4988-860e-61d057d81232" containerName="nmstate-metrics"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.822461 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.833475 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.840203 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"]
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.877439 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hjqwk\" (UniqueName: \"kubernetes.io/projected/42191e90-2de0-4988-860e-61d057d81232-kube-api-access-hjqwk\") pod \"42191e90-2de0-4988-860e-61d057d81232\" (UID: \"42191e90-2de0-4988-860e-61d057d81232\") "
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.883252 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42191e90-2de0-4988-860e-61d057d81232-kube-api-access-hjqwk" (OuterVolumeSpecName: "kube-api-access-hjqwk") pod "42191e90-2de0-4988-860e-61d057d81232" (UID: "42191e90-2de0-4988-860e-61d057d81232"). InnerVolumeSpecName "kube-api-access-hjqwk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.898617 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.912531 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx"]
Oct 01 16:20:02 crc kubenswrapper[4949]: E1001 16:20:02.912935 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7045f77f-0c3a-4e54-8378-2fcda1244f0c" containerName="nmstate-operator"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.912951 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7045f77f-0c3a-4e54-8378-2fcda1244f0c" containerName="nmstate-operator"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.913140 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="7045f77f-0c3a-4e54-8378-2fcda1244f0c" containerName="nmstate-operator"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.914071 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.934195 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx"]
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.965193 4949 generic.go:334] "Generic (PLEG): container finished" podID="3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" containerID="96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141" exitCode=0
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.965249 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-g2cvf" event={"ID":"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9","Type":"ContainerDied","Data":"96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141"}
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.965276 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-g2cvf" event={"ID":"3628bbb1-354b-40b0-89ac-3cc73d3a0ec9","Type":"ContainerDied","Data":"41afdae1f10045eb6276c010cc8a7af333fededfabbb81660eb0dc03911919a2"}
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.965297 4949 scope.go:117] "RemoveContainer" containerID="96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.965495 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-handler-g2cvf"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.976259 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-g5nwf" event={"ID":"3c565206-451b-4bfc-bef6-20b6f3a33546","Type":"ContainerStarted","Data":"411a92948416506691299e1b53419e7cb8178a8c47e8b5a508fc8ed33a495055"}
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.978708 4949 generic.go:334] "Generic (PLEG): container finished" podID="7045f77f-0c3a-4e54-8378-2fcda1244f0c" containerID="1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a" exitCode=0
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.978858 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.978954 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" event={"ID":"7045f77f-0c3a-4e54-8378-2fcda1244f0c","Type":"ContainerDied","Data":"1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a"}
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.978983 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg" event={"ID":"7045f77f-0c3a-4e54-8378-2fcda1244f0c","Type":"ContainerDied","Data":"cb619d1c26605c6f9635e8e33d0f4593bee8596df894b7ee264cb6512d30e9ad"}
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.979110 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-64mqp\" (UniqueName: \"kubernetes.io/projected/7045f77f-0c3a-4e54-8378-2fcda1244f0c-kube-api-access-64mqp\") pod \"7045f77f-0c3a-4e54-8378-2fcda1244f0c\" (UID: \"7045f77f-0c3a-4e54-8378-2fcda1244f0c\") "
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.979484 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/70acd657-66a2-4cc3-90e9-1a2a1448155c-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-hknwz\" (UID: \"70acd657-66a2-4cc3-90e9-1a2a1448155c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.979572 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bf6nh\" (UniqueName: \"kubernetes.io/projected/70acd657-66a2-4cc3-90e9-1a2a1448155c-kube-api-access-bf6nh\") pod \"nmstate-webhook-6cdbc54649-hknwz\" (UID: \"70acd657-66a2-4cc3-90e9-1a2a1448155c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.982578 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hjqwk\" (UniqueName: \"kubernetes.io/projected/42191e90-2de0-4988-860e-61d057d81232-kube-api-access-hjqwk\") on node \"crc\" DevicePath \"\""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.983474 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7045f77f-0c3a-4e54-8378-2fcda1244f0c-kube-api-access-64mqp" (OuterVolumeSpecName: "kube-api-access-64mqp") pod "7045f77f-0c3a-4e54-8378-2fcda1244f0c" (UID: "7045f77f-0c3a-4e54-8378-2fcda1244f0c"). InnerVolumeSpecName "kube-api-access-64mqp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.987964 4949 generic.go:334] "Generic (PLEG): container finished" podID="42191e90-2de0-4988-860e-61d057d81232" containerID="bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564" exitCode=0
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.987996 4949 generic.go:334] "Generic (PLEG): container finished" podID="42191e90-2de0-4988-860e-61d057d81232" containerID="2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6" exitCode=0
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.988068 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv"
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.988078 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" event={"ID":"42191e90-2de0-4988-860e-61d057d81232","Type":"ContainerDied","Data":"bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564"}
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.988104 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" event={"ID":"42191e90-2de0-4988-860e-61d057d81232","Type":"ContainerDied","Data":"2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6"}
Oct 01 16:20:02 crc kubenswrapper[4949]: I1001 16:20:02.988113 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv" event={"ID":"42191e90-2de0-4988-860e-61d057d81232","Type":"ContainerDied","Data":"8d9b2e370934c63ca2484685f613c9cb3a0092b895d9084a99a869e89537c9d2"}
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.002911 4949 generic.go:334] "Generic (PLEG): container finished" podID="d8e345e7-61b3-4723-8332-bb171b328a6a" containerID="7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2" exitCode=0
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.003079 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" event={"ID":"d8e345e7-61b3-4723-8332-bb171b328a6a","Type":"ContainerDied","Data":"7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2"}
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.003114 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5" event={"ID":"d8e345e7-61b3-4723-8332-bb171b328a6a","Type":"ContainerDied","Data":"cba03a42a5fcfb234361bdfe93c70657b64117ecc54c43c42bd33e5bcec3c15f"}
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.003991 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.009153 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-handler-g2cvf"]
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.025232 4949 scope.go:117] "RemoveContainer" containerID="96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.025510 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-nmstate/nmstate-handler-g2cvf"]
Oct 01 16:20:03 crc kubenswrapper[4949]: E1001 16:20:03.025672 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141\": container with ID starting with 96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141 not found: ID does not exist" containerID="96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.025699 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141"} err="failed to get container status \"96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141\": rpc error: code = NotFound desc = could not find container \"96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141\": container with ID starting with 96c52a3add26e2c7255e9782705759508fc82619dc576c0dfaaafc25fc385141 not found: ID does not exist"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.025719 4949 scope.go:117] "RemoveContainer" containerID="1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.033780 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv"]
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.043784 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-nmstate/nmstate-metrics-58fcddf996-tgtpv"]
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.050819 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t"]
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.058869 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5"]
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.061659 4949 scope.go:117] "RemoveContainer" containerID="1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a"
Oct 01 16:20:03 crc kubenswrapper[4949]: E1001 16:20:03.062246 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a\": container with ID starting with 1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a not found: ID does not exist" containerID="1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.062276 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a"} err="failed to get container status \"1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a\": rpc error: code = NotFound desc = could not find container \"1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a\": container with ID starting with 1e04c7dd31a3c4877439051f5269f3ade73ccbaff51e483ebacf167c09746e0a not found: ID does not exist"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.062299 4949 scope.go:117] "RemoveContainer" containerID="bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.063713 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-nmstate/nmstate-webhook-6d689559c5-gzrq5"]
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.078697 4949 scope.go:117] "RemoveContainer" containerID="2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.083785 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phlfn\" (UniqueName: \"kubernetes.io/projected/9826cb28-c103-4f8b-88a6-6476badd1cf7-kube-api-access-phlfn\") pod \"nmstate-metrics-fdff9cb8d-bqplx\" (UID: \"9826cb28-c103-4f8b-88a6-6476badd1cf7\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.084448 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/70acd657-66a2-4cc3-90e9-1a2a1448155c-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-hknwz\" (UID: \"70acd657-66a2-4cc3-90e9-1a2a1448155c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.084622 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bf6nh\" (UniqueName: \"kubernetes.io/projected/70acd657-66a2-4cc3-90e9-1a2a1448155c-kube-api-access-bf6nh\") pod \"nmstate-webhook-6cdbc54649-hknwz\" (UID: \"70acd657-66a2-4cc3-90e9-1a2a1448155c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.084740 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-64mqp\" (UniqueName: \"kubernetes.io/projected/7045f77f-0c3a-4e54-8378-2fcda1244f0c-kube-api-access-64mqp\") on node \"crc\" DevicePath \"\""
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.089851 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-key-pair\" (UniqueName: \"kubernetes.io/secret/70acd657-66a2-4cc3-90e9-1a2a1448155c-tls-key-pair\") pod \"nmstate-webhook-6cdbc54649-hknwz\" (UID: \"70acd657-66a2-4cc3-90e9-1a2a1448155c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.095756 4949 scope.go:117] "RemoveContainer" containerID="bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564"
Oct 01 16:20:03 crc kubenswrapper[4949]: E1001 16:20:03.096110 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564\": container with ID starting with bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564 not found: ID does not exist" containerID="bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.096153 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564"} err="failed to get container status \"bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564\": rpc error: code = NotFound desc = could not find container \"bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564\": container with ID starting with bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564 not found: ID does not exist"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.096173 4949 scope.go:117] "RemoveContainer" containerID="2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6"
Oct 01 16:20:03 crc kubenswrapper[4949]: E1001 16:20:03.096632 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6\": container with ID starting with 2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6 not found: ID does not exist" containerID="2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.096699 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6"} err="failed to get container status \"2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6\": rpc error: code = NotFound desc = could not find container \"2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6\": container with ID starting with 2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6 not found: ID does not exist"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.096745 4949 scope.go:117] "RemoveContainer" containerID="bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.097048 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564"} err="failed to get container status \"bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564\": rpc error: code = NotFound desc = could not find container \"bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564\": container with ID starting with bf16fea18da54745bde0857da1d9f8f017df4bf44be029e3e1711f6808b1e564 not found: ID does not exist"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.097072 4949 scope.go:117] "RemoveContainer" containerID="2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.097719 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6"} err="failed to get container status \"2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6\": rpc error: code = NotFound desc = could not find container \"2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6\": container with ID starting with 2f83d858ea177a2239d168688614f932cf62f1acca2133a2e5638af2e52fa5a6 not found: ID does not exist"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.097756 4949 scope.go:117] "RemoveContainer" containerID="7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.100443 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bf6nh\" (UniqueName: \"kubernetes.io/projected/70acd657-66a2-4cc3-90e9-1a2a1448155c-kube-api-access-bf6nh\") pod \"nmstate-webhook-6cdbc54649-hknwz\" (UID: \"70acd657-66a2-4cc3-90e9-1a2a1448155c\") " pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.115511 4949 scope.go:117] "RemoveContainer" containerID="7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2"
Oct 01 16:20:03 crc kubenswrapper[4949]: E1001 16:20:03.117432 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2\": container with ID starting with 7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2 not found: ID does not exist" containerID="7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.117466 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2"} err="failed to get container status \"7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2\": rpc error: code = NotFound desc = could not find container \"7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2\": container with ID starting with 7d84429cb334d8af2b7d77a848dbc2692901ce13f82f55f8ec04f02c417016d2 not found: ID does not exist"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.154692 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.186385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-phlfn\" (UniqueName: \"kubernetes.io/projected/9826cb28-c103-4f8b-88a6-6476badd1cf7-kube-api-access-phlfn\") pod \"nmstate-metrics-fdff9cb8d-bqplx\" (UID: \"9826cb28-c103-4f8b-88a6-6476badd1cf7\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.209251 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-phlfn\" (UniqueName: \"kubernetes.io/projected/9826cb28-c103-4f8b-88a6-6476badd1cf7-kube-api-access-phlfn\") pod \"nmstate-metrics-fdff9cb8d-bqplx\" (UID: \"9826cb28-c103-4f8b-88a6-6476badd1cf7\") " pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.239565 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.343208 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg"]
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.343271 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-nmstate/nmstate-operator-5d6f6cfd66-bpnxg"]
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.619183 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3628bbb1-354b-40b0-89ac-3cc73d3a0ec9" path="/var/lib/kubelet/pods/3628bbb1-354b-40b0-89ac-3cc73d3a0ec9/volumes"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.620166 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42191e90-2de0-4988-860e-61d057d81232" path="/var/lib/kubelet/pods/42191e90-2de0-4988-860e-61d057d81232/volumes"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.620788 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7045f77f-0c3a-4e54-8378-2fcda1244f0c" path="/var/lib/kubelet/pods/7045f77f-0c3a-4e54-8378-2fcda1244f0c/volumes"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.623007 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8e345e7-61b3-4723-8332-bb171b328a6a" path="/var/lib/kubelet/pods/d8e345e7-61b3-4723-8332-bb171b328a6a/volumes"
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.624008 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"]
Oct 01 16:20:03 crc kubenswrapper[4949]: I1001 16:20:03.728960 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx"]
Oct 01 16:20:04 crc kubenswrapper[4949]: I1001 16:20:04.010712 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" event={"ID":"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c","Type":"ContainerStarted","Data":"4e3b259c9ead1e11efd807edbaa1143139db0ee1d4eda722a4370c5c2c4bde5c"}
Oct 01 16:20:04 crc kubenswrapper[4949]: I1001 16:20:04.012795 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx" event={"ID":"9826cb28-c103-4f8b-88a6-6476badd1cf7","Type":"ContainerStarted","Data":"59d7a96cb8975e69ffeeb9694c121303de71de97ea30958d65019dc1a17cde14"}
Oct 01 16:20:04 crc kubenswrapper[4949]: I1001 16:20:04.014410 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz" event={"ID":"70acd657-66a2-4cc3-90e9-1a2a1448155c","Type":"ContainerStarted","Data":"c34f7f4facd071aef7a96eda65d1dcb03197824e521e8bb5961e23afe461328d"}
Oct 01 16:20:04 crc kubenswrapper[4949]: I1001 16:20:04.602560 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de"
Oct 01 16:20:04 crc kubenswrapper[4949]: E1001 16:20:04.603261 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558"
Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.040870 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" event={"ID":"8a0b8afa-4a92-444c-b7a1-cf9939e5d88c","Type":"ContainerStarted","Data":"53119c86e12766654c08c2a26bf9d2f3ff72b639aea1e4b9ba0f62a9e868ea64"}
Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.042960 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx" event={"ID":"9826cb28-c103-4f8b-88a6-6476badd1cf7","Type":"ContainerStarted","Data":"eee78c7d9d6142acb5eecfddc767a4813da4ee1392763eecedd9ae17612433cf"}
Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.045819 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz" event={"ID":"70acd657-66a2-4cc3-90e9-1a2a1448155c","Type":"ContainerStarted","Data":"63567ace78ac1e0a4f00a5eb143e6c16dda59414a0e1e18fc6209a730df3a372"}
Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.045917 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz"
Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.047285 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-handler-g5nwf" event={"ID":"3c565206-451b-4bfc-bef6-20b6f3a33546","Type":"ContainerStarted","Data":"b97113f4c4c9d0cd9cd25971f1e7a13b3529721f24fed29d951a76c23edfde47"}
Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.047794 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-nmstate/nmstate-handler-g5nwf"
Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.061154 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-console-plugin-6b874cbd85-b9x5t" podStartSLOduration=1.922403005 podStartE2EDuration="5.061114948s" podCreationTimestamp="2025-10-01 16:20:02 +0000 UTC" firstStartedPulling="2025-10-01 16:20:03.061939176 +0000 UTC m=+2302.367545367" lastFinishedPulling="2025-10-01 16:20:06.200651119 +0000 UTC m=+2305.506257310" observedRunningTime="2025-10-01 16:20:07.054104665 +0000 UTC m=+2306.359710856" watchObservedRunningTime="2025-10-01 16:20:07.061114948 +0000 UTC m=+2306.366721139"
Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.129939 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-handler-g5nwf"
podStartSLOduration=1.873427112 podStartE2EDuration="5.12992027s" podCreationTimestamp="2025-10-01 16:20:02 +0000 UTC" firstStartedPulling="2025-10-01 16:20:02.945200041 +0000 UTC m=+2302.250806232" lastFinishedPulling="2025-10-01 16:20:06.201693199 +0000 UTC m=+2305.507299390" observedRunningTime="2025-10-01 16:20:07.121813868 +0000 UTC m=+2306.427420069" watchObservedRunningTime="2025-10-01 16:20:07.12992027 +0000 UTC m=+2306.435526461" Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.139046 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp"] Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.139274 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" podUID="a1e85787-64f5-453e-805e-59446da74677" containerName="nmstate-console-plugin" containerID="cri-o://3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8" gracePeriod=30 Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.644283 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.662750 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz" podStartSLOduration=3.067957102 podStartE2EDuration="5.662730291s" podCreationTimestamp="2025-10-01 16:20:02 +0000 UTC" firstStartedPulling="2025-10-01 16:20:03.605927363 +0000 UTC m=+2302.911533554" lastFinishedPulling="2025-10-01 16:20:06.200700552 +0000 UTC m=+2305.506306743" observedRunningTime="2025-10-01 16:20:07.1438123 +0000 UTC m=+2306.449418501" watchObservedRunningTime="2025-10-01 16:20:07.662730291 +0000 UTC m=+2306.968336482" Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.781481 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwfk9\" (UniqueName: \"kubernetes.io/projected/a1e85787-64f5-453e-805e-59446da74677-kube-api-access-hwfk9\") pod \"a1e85787-64f5-453e-805e-59446da74677\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.781696 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e85787-64f5-453e-805e-59446da74677-plugin-serving-cert\") pod \"a1e85787-64f5-453e-805e-59446da74677\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.781810 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a1e85787-64f5-453e-805e-59446da74677-nginx-conf\") pod \"a1e85787-64f5-453e-805e-59446da74677\" (UID: \"a1e85787-64f5-453e-805e-59446da74677\") " Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.800448 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/a1e85787-64f5-453e-805e-59446da74677-kube-api-access-hwfk9" (OuterVolumeSpecName: "kube-api-access-hwfk9") pod "a1e85787-64f5-453e-805e-59446da74677" (UID: "a1e85787-64f5-453e-805e-59446da74677"). InnerVolumeSpecName "kube-api-access-hwfk9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.802023 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1e85787-64f5-453e-805e-59446da74677-plugin-serving-cert" (OuterVolumeSpecName: "plugin-serving-cert") pod "a1e85787-64f5-453e-805e-59446da74677" (UID: "a1e85787-64f5-453e-805e-59446da74677"). InnerVolumeSpecName "plugin-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.819675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a1e85787-64f5-453e-805e-59446da74677-nginx-conf" (OuterVolumeSpecName: "nginx-conf") pod "a1e85787-64f5-453e-805e-59446da74677" (UID: "a1e85787-64f5-453e-805e-59446da74677"). InnerVolumeSpecName "nginx-conf". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.884725 4949 reconciler_common.go:293] "Volume detached for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/a1e85787-64f5-453e-805e-59446da74677-nginx-conf\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.884977 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hwfk9\" (UniqueName: \"kubernetes.io/projected/a1e85787-64f5-453e-805e-59446da74677-kube-api-access-hwfk9\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:07 crc kubenswrapper[4949]: I1001 16:20:07.885056 4949 reconciler_common.go:293] "Volume detached for volume \"plugin-serving-cert\" (UniqueName: \"kubernetes.io/secret/a1e85787-64f5-453e-805e-59446da74677-plugin-serving-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:08 crc kubenswrapper[4949]: I1001 16:20:08.057805 4949 generic.go:334] "Generic (PLEG): container finished" podID="a1e85787-64f5-453e-805e-59446da74677" containerID="3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8" exitCode=0 Oct 01 16:20:08 crc kubenswrapper[4949]: I1001 16:20:08.057875 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" Oct 01 16:20:08 crc kubenswrapper[4949]: I1001 16:20:08.057920 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" event={"ID":"a1e85787-64f5-453e-805e-59446da74677","Type":"ContainerDied","Data":"3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8"} Oct 01 16:20:08 crc kubenswrapper[4949]: I1001 16:20:08.057955 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp" event={"ID":"a1e85787-64f5-453e-805e-59446da74677","Type":"ContainerDied","Data":"f990ba22142551f435af3d8516a57bc696074a2f9001558209f09797500a3ec3"} Oct 01 16:20:08 crc kubenswrapper[4949]: I1001 16:20:08.057980 4949 scope.go:117] "RemoveContainer" containerID="3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8" Oct 01 16:20:08 crc kubenswrapper[4949]: I1001 16:20:08.092839 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp"] Oct 01 16:20:08 crc kubenswrapper[4949]: I1001 16:20:08.094530 4949 scope.go:117] "RemoveContainer" containerID="3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8" Oct 01 16:20:08 crc kubenswrapper[4949]: I1001 16:20:08.098291 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-nmstate/nmstate-console-plugin-864bb6dfb5-lf4tp"] Oct 01 16:20:08 crc kubenswrapper[4949]: E1001 16:20:08.102320 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8\": container with ID starting with 3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8 not found: ID does not exist" containerID="3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8" Oct 01 16:20:08 crc kubenswrapper[4949]: I1001 
16:20:08.102371 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8"} err="failed to get container status \"3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8\": rpc error: code = NotFound desc = could not find container \"3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8\": container with ID starting with 3983b635678d5ce4336eca166c55ba6042e9ecd6ffdcafd10bf170210a5d76d8 not found: ID does not exist" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.579977 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg"] Oct 01 16:20:09 crc kubenswrapper[4949]: E1001 16:20:09.580773 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1e85787-64f5-453e-805e-59446da74677" containerName="nmstate-console-plugin" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.580783 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1e85787-64f5-453e-805e-59446da74677" containerName="nmstate-console-plugin" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.580940 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1e85787-64f5-453e-805e-59446da74677" containerName="nmstate-console-plugin" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.581514 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.637265 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-webhook-cert\") pod \"metallb-operator-controller-manager-6fcd764774-zd8pg\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.637386 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-apiservice-cert\") pod \"metallb-operator-controller-manager-6fcd764774-zd8pg\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.637424 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btm6m\" (UniqueName: \"kubernetes.io/projected/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-kube-api-access-btm6m\") pod \"metallb-operator-controller-manager-6fcd764774-zd8pg\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.645258 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1e85787-64f5-453e-805e-59446da74677" path="/var/lib/kubelet/pods/a1e85787-64f5-453e-805e-59446da74677/volumes" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.645859 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg"] Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.739764 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-apiservice-cert\") pod \"metallb-operator-controller-manager-6fcd764774-zd8pg\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.739841 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btm6m\" (UniqueName: \"kubernetes.io/projected/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-kube-api-access-btm6m\") pod \"metallb-operator-controller-manager-6fcd764774-zd8pg\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.739915 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-webhook-cert\") pod \"metallb-operator-controller-manager-6fcd764774-zd8pg\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.747108 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-apiservice-cert\") pod \"metallb-operator-controller-manager-6fcd764774-zd8pg\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.749792 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-webhook-cert\") pod \"metallb-operator-controller-manager-6fcd764774-zd8pg\" 
(UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.758447 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btm6m\" (UniqueName: \"kubernetes.io/projected/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-kube-api-access-btm6m\") pod \"metallb-operator-controller-manager-6fcd764774-zd8pg\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.905212 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p"] Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.911425 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.925926 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:09 crc kubenswrapper[4949]: I1001 16:20:09.947302 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p"] Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.047504 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-webhook-cert\") pod \"metallb-operator-webhook-server-75b55bf9f4-xhb9p\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.047563 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xdpc\" (UniqueName: \"kubernetes.io/projected/72d082fe-c501-45dc-a52c-39abe6d7326a-kube-api-access-8xdpc\") pod \"metallb-operator-webhook-server-75b55bf9f4-xhb9p\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.047824 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-apiservice-cert\") pod \"metallb-operator-webhook-server-75b55bf9f4-xhb9p\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.149596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-webhook-cert\") pod \"metallb-operator-webhook-server-75b55bf9f4-xhb9p\" (UID: 
\"72d082fe-c501-45dc-a52c-39abe6d7326a\") " pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.149667 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xdpc\" (UniqueName: \"kubernetes.io/projected/72d082fe-c501-45dc-a52c-39abe6d7326a-kube-api-access-8xdpc\") pod \"metallb-operator-webhook-server-75b55bf9f4-xhb9p\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.149742 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-apiservice-cert\") pod \"metallb-operator-webhook-server-75b55bf9f4-xhb9p\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.154710 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-apiservice-cert\") pod \"metallb-operator-webhook-server-75b55bf9f4-xhb9p\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.155878 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-webhook-cert\") pod \"metallb-operator-webhook-server-75b55bf9f4-xhb9p\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.200142 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-8xdpc\" (UniqueName: \"kubernetes.io/projected/72d082fe-c501-45dc-a52c-39abe6d7326a-kube-api-access-8xdpc\") pod \"metallb-operator-webhook-server-75b55bf9f4-xhb9p\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.245010 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.762725 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg"] Oct 01 16:20:10 crc kubenswrapper[4949]: W1001 16:20:10.775721 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc880bcc_217b_4bff_aa0f_c9f7d6fff779.slice/crio-648cc9c44cdd4033e6f5f1084ad0c98413746d710e805b8e3b2c99683ffe7caa WatchSource:0}: Error finding container 648cc9c44cdd4033e6f5f1084ad0c98413746d710e805b8e3b2c99683ffe7caa: Status 404 returned error can't find the container with id 648cc9c44cdd4033e6f5f1084ad0c98413746d710e805b8e3b2c99683ffe7caa Oct 01 16:20:10 crc kubenswrapper[4949]: I1001 16:20:10.848423 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p"] Oct 01 16:20:11 crc kubenswrapper[4949]: I1001 16:20:11.093904 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx" event={"ID":"9826cb28-c103-4f8b-88a6-6476badd1cf7","Type":"ContainerStarted","Data":"6a8067d5a6cfbc6f0775fa1d7d1cec1d5244461219b65e4cb2a7a1072cbccf58"} Oct 01 16:20:11 crc kubenswrapper[4949]: I1001 16:20:11.095082 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" 
event={"ID":"bc880bcc-217b-4bff-aa0f-c9f7d6fff779","Type":"ContainerStarted","Data":"648cc9c44cdd4033e6f5f1084ad0c98413746d710e805b8e3b2c99683ffe7caa"} Oct 01 16:20:11 crc kubenswrapper[4949]: I1001 16:20:11.097211 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" event={"ID":"72d082fe-c501-45dc-a52c-39abe6d7326a","Type":"ContainerStarted","Data":"3c56abc1c9fc8e877d165cfaf82355210f3f9d45eec9579d04ae2466a490f896"} Oct 01 16:20:11 crc kubenswrapper[4949]: I1001 16:20:11.113281 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-nmstate/nmstate-metrics-fdff9cb8d-bqplx" podStartSLOduration=2.619889141 podStartE2EDuration="9.113256217s" podCreationTimestamp="2025-10-01 16:20:02 +0000 UTC" firstStartedPulling="2025-10-01 16:20:03.738346767 +0000 UTC m=+2303.043952948" lastFinishedPulling="2025-10-01 16:20:10.231713833 +0000 UTC m=+2309.537320024" observedRunningTime="2025-10-01 16:20:11.106560965 +0000 UTC m=+2310.412167156" watchObservedRunningTime="2025-10-01 16:20:11.113256217 +0000 UTC m=+2310.418862408" Oct 01 16:20:12 crc kubenswrapper[4949]: I1001 16:20:12.954300 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-handler-g5nwf" Oct 01 16:20:18 crc kubenswrapper[4949]: I1001 16:20:18.601775 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:20:18 crc kubenswrapper[4949]: E1001 16:20:18.602512 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:20:19 
crc kubenswrapper[4949]: I1001 16:20:19.194951 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" event={"ID":"bc880bcc-217b-4bff-aa0f-c9f7d6fff779","Type":"ContainerStarted","Data":"a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11"} Oct 01 16:20:19 crc kubenswrapper[4949]: I1001 16:20:19.196006 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:19 crc kubenswrapper[4949]: I1001 16:20:19.198080 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" event={"ID":"72d082fe-c501-45dc-a52c-39abe6d7326a","Type":"ContainerStarted","Data":"a480d81ef37de1008b4719578786822a626b9e4ab82fa386802ac28dcd1eea95"} Oct 01 16:20:19 crc kubenswrapper[4949]: I1001 16:20:19.198590 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:19 crc kubenswrapper[4949]: I1001 16:20:19.221303 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" podStartSLOduration=2.479225782 podStartE2EDuration="10.221285412s" podCreationTimestamp="2025-10-01 16:20:09 +0000 UTC" firstStartedPulling="2025-10-01 16:20:10.791291496 +0000 UTC m=+2310.096897687" lastFinishedPulling="2025-10-01 16:20:18.533351126 +0000 UTC m=+2317.838957317" observedRunningTime="2025-10-01 16:20:19.217359194 +0000 UTC m=+2318.522965395" watchObservedRunningTime="2025-10-01 16:20:19.221285412 +0000 UTC m=+2318.526891623" Oct 01 16:20:19 crc kubenswrapper[4949]: I1001 16:20:19.242291 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" podStartSLOduration=2.543690825 
podStartE2EDuration="10.242276246s" podCreationTimestamp="2025-10-01 16:20:09 +0000 UTC" firstStartedPulling="2025-10-01 16:20:10.864709245 +0000 UTC m=+2310.170315436" lastFinishedPulling="2025-10-01 16:20:18.563294676 +0000 UTC m=+2317.868900857" observedRunningTime="2025-10-01 16:20:19.236316973 +0000 UTC m=+2318.541923174" watchObservedRunningTime="2025-10-01 16:20:19.242276246 +0000 UTC m=+2318.547882427" Oct 01 16:20:20 crc kubenswrapper[4949]: I1001 16:20:20.212008 4949 generic.go:334] "Generic (PLEG): container finished" podID="05704de2-46b5-4cce-bea8-9e1a00e0d2a5" containerID="06388d814765b728a78e162d71cbb2f14dcbb834f5b736344045ce170d79fe89" exitCode=0 Oct 01 16:20:20 crc kubenswrapper[4949]: I1001 16:20:20.212195 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" event={"ID":"05704de2-46b5-4cce-bea8-9e1a00e0d2a5","Type":"ContainerDied","Data":"06388d814765b728a78e162d71cbb2f14dcbb834f5b736344045ce170d79fe89"} Oct 01 16:20:21 crc kubenswrapper[4949]: I1001 16:20:21.743721 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:20:21 crc kubenswrapper[4949]: I1001 16:20:21.902000 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ssh-key\") pod \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " Oct 01 16:20:21 crc kubenswrapper[4949]: I1001 16:20:21.902542 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-inventory\") pod \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " Oct 01 16:20:21 crc kubenswrapper[4949]: I1001 16:20:21.902687 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ceph\") pod \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " Oct 01 16:20:21 crc kubenswrapper[4949]: I1001 16:20:21.902929 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ckzfn\" (UniqueName: \"kubernetes.io/projected/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-kube-api-access-ckzfn\") pod \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\" (UID: \"05704de2-46b5-4cce-bea8-9e1a00e0d2a5\") " Oct 01 16:20:21 crc kubenswrapper[4949]: I1001 16:20:21.910628 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-kube-api-access-ckzfn" (OuterVolumeSpecName: "kube-api-access-ckzfn") pod "05704de2-46b5-4cce-bea8-9e1a00e0d2a5" (UID: "05704de2-46b5-4cce-bea8-9e1a00e0d2a5"). InnerVolumeSpecName "kube-api-access-ckzfn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:20:21 crc kubenswrapper[4949]: I1001 16:20:21.913928 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ceph" (OuterVolumeSpecName: "ceph") pod "05704de2-46b5-4cce-bea8-9e1a00e0d2a5" (UID: "05704de2-46b5-4cce-bea8-9e1a00e0d2a5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:21 crc kubenswrapper[4949]: I1001 16:20:21.930350 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-inventory" (OuterVolumeSpecName: "inventory") pod "05704de2-46b5-4cce-bea8-9e1a00e0d2a5" (UID: "05704de2-46b5-4cce-bea8-9e1a00e0d2a5"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:21 crc kubenswrapper[4949]: I1001 16:20:21.932694 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "05704de2-46b5-4cce-bea8-9e1a00e0d2a5" (UID: "05704de2-46b5-4cce-bea8-9e1a00e0d2a5"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.005499 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.005536 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.005546 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.005554 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ckzfn\" (UniqueName: \"kubernetes.io/projected/05704de2-46b5-4cce-bea8-9e1a00e0d2a5-kube-api-access-ckzfn\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.231171 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" event={"ID":"05704de2-46b5-4cce-bea8-9e1a00e0d2a5","Type":"ContainerDied","Data":"ad7583957ade49049dacf2a0a3e5324c480fbce66d751502f16b23ff4d2ce9c3"} Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.231235 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad7583957ade49049dacf2a0a3e5324c480fbce66d751502f16b23ff4d2ce9c3" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.231335 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.320877 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz"] Oct 01 16:20:22 crc kubenswrapper[4949]: E1001 16:20:22.321651 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05704de2-46b5-4cce-bea8-9e1a00e0d2a5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.321747 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="05704de2-46b5-4cce-bea8-9e1a00e0d2a5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.322027 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="05704de2-46b5-4cce-bea8-9e1a00e0d2a5" containerName="configure-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.323148 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.324959 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.325391 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.326842 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.327030 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.327086 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.344267 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz"] Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.412602 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.412645 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2bst\" (UniqueName: \"kubernetes.io/projected/410ce0aa-73a6-4d01-998e-bcd51879be8e-kube-api-access-p2bst\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: 
\"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.412697 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.412754 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.514292 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.514338 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2bst\" (UniqueName: \"kubernetes.io/projected/410ce0aa-73a6-4d01-998e-bcd51879be8e-kube-api-access-p2bst\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.514386 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.514412 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.519333 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-inventory\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.521000 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ceph\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.537859 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ssh-key\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " 
pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.538146 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2bst\" (UniqueName: \"kubernetes.io/projected/410ce0aa-73a6-4d01-998e-bcd51879be8e-kube-api-access-p2bst\") pod \"validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:22 crc kubenswrapper[4949]: I1001 16:20:22.640423 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:23 crc kubenswrapper[4949]: I1001 16:20:23.161625 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-nmstate/nmstate-webhook-6cdbc54649-hknwz" Oct 01 16:20:23 crc kubenswrapper[4949]: I1001 16:20:23.276923 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz"] Oct 01 16:20:23 crc kubenswrapper[4949]: W1001 16:20:23.281270 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod410ce0aa_73a6_4d01_998e_bcd51879be8e.slice/crio-93acbf63ac72f8e06c5040cf8807e13040ea55cfe22f1a78a8b1440e0d3a72f5 WatchSource:0}: Error finding container 93acbf63ac72f8e06c5040cf8807e13040ea55cfe22f1a78a8b1440e0d3a72f5: Status 404 returned error can't find the container with id 93acbf63ac72f8e06c5040cf8807e13040ea55cfe22f1a78a8b1440e0d3a72f5 Oct 01 16:20:24 crc kubenswrapper[4949]: I1001 16:20:24.257274 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" 
event={"ID":"410ce0aa-73a6-4d01-998e-bcd51879be8e","Type":"ContainerStarted","Data":"93acbf63ac72f8e06c5040cf8807e13040ea55cfe22f1a78a8b1440e0d3a72f5"} Oct 01 16:20:25 crc kubenswrapper[4949]: I1001 16:20:25.267891 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" event={"ID":"410ce0aa-73a6-4d01-998e-bcd51879be8e","Type":"ContainerStarted","Data":"f97e7ae45f2ec987ebe3415801210a07bda8d9cb81100c4f90c3564bbeadf27e"} Oct 01 16:20:25 crc kubenswrapper[4949]: I1001 16:20:25.281698 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" podStartSLOduration=1.8171539920000002 podStartE2EDuration="3.281684541s" podCreationTimestamp="2025-10-01 16:20:22 +0000 UTC" firstStartedPulling="2025-10-01 16:20:23.28320651 +0000 UTC m=+2322.588812701" lastFinishedPulling="2025-10-01 16:20:24.747737059 +0000 UTC m=+2324.053343250" observedRunningTime="2025-10-01 16:20:25.280607851 +0000 UTC m=+2324.586214042" watchObservedRunningTime="2025-10-01 16:20:25.281684541 +0000 UTC m=+2324.587290732" Oct 01 16:20:29 crc kubenswrapper[4949]: I1001 16:20:29.601653 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:20:29 crc kubenswrapper[4949]: E1001 16:20:29.602269 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.252145 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.334384 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9"] Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.334707 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" podUID="37b8bea9-55e5-46a9-9217-d001b1157e9f" containerName="webhook-server" containerID="cri-o://54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c" gracePeriod=2 Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.344736 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9"] Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.347930 4949 generic.go:334] "Generic (PLEG): container finished" podID="410ce0aa-73a6-4d01-998e-bcd51879be8e" containerID="f97e7ae45f2ec987ebe3415801210a07bda8d9cb81100c4f90c3564bbeadf27e" exitCode=0 Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.347995 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" event={"ID":"410ce0aa-73a6-4d01-998e-bcd51879be8e","Type":"ContainerDied","Data":"f97e7ae45f2ec987ebe3415801210a07bda8d9cb81100c4f90c3564bbeadf27e"} Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.800874 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.954838 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-webhook-cert\") pod \"37b8bea9-55e5-46a9-9217-d001b1157e9f\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.954906 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-apiservice-cert\") pod \"37b8bea9-55e5-46a9-9217-d001b1157e9f\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.955054 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwm9b\" (UniqueName: \"kubernetes.io/projected/37b8bea9-55e5-46a9-9217-d001b1157e9f-kube-api-access-xwm9b\") pod \"37b8bea9-55e5-46a9-9217-d001b1157e9f\" (UID: \"37b8bea9-55e5-46a9-9217-d001b1157e9f\") " Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.971271 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "37b8bea9-55e5-46a9-9217-d001b1157e9f" (UID: "37b8bea9-55e5-46a9-9217-d001b1157e9f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.974685 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37b8bea9-55e5-46a9-9217-d001b1157e9f-kube-api-access-xwm9b" (OuterVolumeSpecName: "kube-api-access-xwm9b") pod "37b8bea9-55e5-46a9-9217-d001b1157e9f" (UID: "37b8bea9-55e5-46a9-9217-d001b1157e9f"). InnerVolumeSpecName "kube-api-access-xwm9b". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:20:30 crc kubenswrapper[4949]: I1001 16:20:30.981268 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "37b8bea9-55e5-46a9-9217-d001b1157e9f" (UID: "37b8bea9-55e5-46a9-9217-d001b1157e9f"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.056991 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.057022 4949 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/37b8bea9-55e5-46a9-9217-d001b1157e9f-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.057033 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xwm9b\" (UniqueName: \"kubernetes.io/projected/37b8bea9-55e5-46a9-9217-d001b1157e9f-kube-api-access-xwm9b\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.359011 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5d485496f6-kg4s9" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.359010 4949 generic.go:334] "Generic (PLEG): container finished" podID="37b8bea9-55e5-46a9-9217-d001b1157e9f" containerID="54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c" exitCode=0 Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.359114 4949 scope.go:117] "RemoveContainer" containerID="54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.394302 4949 scope.go:117] "RemoveContainer" containerID="54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c" Oct 01 16:20:31 crc kubenswrapper[4949]: E1001 16:20:31.394927 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c\": container with ID starting with 54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c not found: ID does not exist" containerID="54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.394988 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c"} err="failed to get container status \"54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c\": rpc error: code = NotFound desc = could not find container \"54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c\": container with ID starting with 54999edece2bd43148be09e2ad7cd8eae9c7dac0b312dac220b390145f7db59c not found: ID does not exist" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.619521 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37b8bea9-55e5-46a9-9217-d001b1157e9f" 
path="/var/lib/kubelet/pods/37b8bea9-55e5-46a9-9217-d001b1157e9f/volumes" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.839040 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.972261 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ssh-key\") pod \"410ce0aa-73a6-4d01-998e-bcd51879be8e\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.972365 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ceph\") pod \"410ce0aa-73a6-4d01-998e-bcd51879be8e\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.972503 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-inventory\") pod \"410ce0aa-73a6-4d01-998e-bcd51879be8e\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.972536 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2bst\" (UniqueName: \"kubernetes.io/projected/410ce0aa-73a6-4d01-998e-bcd51879be8e-kube-api-access-p2bst\") pod \"410ce0aa-73a6-4d01-998e-bcd51879be8e\" (UID: \"410ce0aa-73a6-4d01-998e-bcd51879be8e\") " Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.976993 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ceph" (OuterVolumeSpecName: "ceph") pod "410ce0aa-73a6-4d01-998e-bcd51879be8e" (UID: "410ce0aa-73a6-4d01-998e-bcd51879be8e"). 
InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:31 crc kubenswrapper[4949]: I1001 16:20:31.977777 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/410ce0aa-73a6-4d01-998e-bcd51879be8e-kube-api-access-p2bst" (OuterVolumeSpecName: "kube-api-access-p2bst") pod "410ce0aa-73a6-4d01-998e-bcd51879be8e" (UID: "410ce0aa-73a6-4d01-998e-bcd51879be8e"). InnerVolumeSpecName "kube-api-access-p2bst". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.001879 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "410ce0aa-73a6-4d01-998e-bcd51879be8e" (UID: "410ce0aa-73a6-4d01-998e-bcd51879be8e"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.009822 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-inventory" (OuterVolumeSpecName: "inventory") pod "410ce0aa-73a6-4d01-998e-bcd51879be8e" (UID: "410ce0aa-73a6-4d01-998e-bcd51879be8e"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.074509 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.074537 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.074546 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/410ce0aa-73a6-4d01-998e-bcd51879be8e-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.074556 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2bst\" (UniqueName: \"kubernetes.io/projected/410ce0aa-73a6-4d01-998e-bcd51879be8e-kube-api-access-p2bst\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.380519 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" event={"ID":"410ce0aa-73a6-4d01-998e-bcd51879be8e","Type":"ContainerDied","Data":"93acbf63ac72f8e06c5040cf8807e13040ea55cfe22f1a78a8b1440e0d3a72f5"} Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.380561 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="93acbf63ac72f8e06c5040cf8807e13040ea55cfe22f1a78a8b1440e0d3a72f5" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.380627 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.480609 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn"] Oct 01 16:20:32 crc kubenswrapper[4949]: E1001 16:20:32.481309 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="410ce0aa-73a6-4d01-998e-bcd51879be8e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.481338 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="410ce0aa-73a6-4d01-998e-bcd51879be8e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:20:32 crc kubenswrapper[4949]: E1001 16:20:32.481367 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="37b8bea9-55e5-46a9-9217-d001b1157e9f" containerName="webhook-server" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.481375 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="37b8bea9-55e5-46a9-9217-d001b1157e9f" containerName="webhook-server" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.481611 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="37b8bea9-55e5-46a9-9217-d001b1157e9f" containerName="webhook-server" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.481642 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="410ce0aa-73a6-4d01-998e-bcd51879be8e" containerName="validate-network-edpm-deployment-openstack-edpm-ipam" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.482944 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.487011 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.487306 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.487488 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.487670 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.488401 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.495889 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn"] Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.587828 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.587898 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.587925 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.587963 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5vrhs\" (UniqueName: \"kubernetes.io/projected/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-kube-api-access-5vrhs\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.690313 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.690469 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.690520 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.690608 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5vrhs\" (UniqueName: \"kubernetes.io/projected/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-kube-api-access-5vrhs\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.697984 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ceph\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.699475 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ssh-key\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.702105 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-inventory\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 
16:20:32.706768 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5vrhs\" (UniqueName: \"kubernetes.io/projected/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-kube-api-access-5vrhs\") pod \"install-os-edpm-deployment-openstack-edpm-ipam-279tn\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:32 crc kubenswrapper[4949]: I1001 16:20:32.842171 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:20:33 crc kubenswrapper[4949]: I1001 16:20:33.490478 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn"] Oct 01 16:20:33 crc kubenswrapper[4949]: W1001 16:20:33.495020 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbd38b0c3_ae51_4ad3_ae6c_2c926614301c.slice/crio-09bad2000fbf4865fe2b5105179577facad92634378f71ba0e3c3730cd27ff75 WatchSource:0}: Error finding container 09bad2000fbf4865fe2b5105179577facad92634378f71ba0e3c3730cd27ff75: Status 404 returned error can't find the container with id 09bad2000fbf4865fe2b5105179577facad92634378f71ba0e3c3730cd27ff75 Oct 01 16:20:34 crc kubenswrapper[4949]: I1001 16:20:34.417094 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" event={"ID":"bd38b0c3-ae51-4ad3-ae6c-2c926614301c","Type":"ContainerStarted","Data":"09bad2000fbf4865fe2b5105179577facad92634378f71ba0e3c3730cd27ff75"} Oct 01 16:20:35 crc kubenswrapper[4949]: I1001 16:20:35.427139 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" 
event={"ID":"bd38b0c3-ae51-4ad3-ae6c-2c926614301c","Type":"ContainerStarted","Data":"2caaf8670408608e8b6913b517d28bbd0d947ebe02030b1a0832aa4259d32e8a"} Oct 01 16:20:35 crc kubenswrapper[4949]: I1001 16:20:35.451182 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" podStartSLOduration=2.693500695 podStartE2EDuration="3.451167549s" podCreationTimestamp="2025-10-01 16:20:32 +0000 UTC" firstStartedPulling="2025-10-01 16:20:33.497923427 +0000 UTC m=+2332.803529618" lastFinishedPulling="2025-10-01 16:20:34.255590281 +0000 UTC m=+2333.561196472" observedRunningTime="2025-10-01 16:20:35.447583751 +0000 UTC m=+2334.753189952" watchObservedRunningTime="2025-10-01 16:20:35.451167549 +0000 UTC m=+2334.756773740" Oct 01 16:20:44 crc kubenswrapper[4949]: I1001 16:20:44.602045 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:20:44 crc kubenswrapper[4949]: E1001 16:20:44.603009 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:20:49 crc kubenswrapper[4949]: I1001 16:20:49.929003 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:20:50 crc kubenswrapper[4949]: I1001 16:20:50.057893 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd"] Oct 01 16:20:50 crc kubenswrapper[4949]: I1001 16:20:50.058074 4949 kuberuntime_container.go:808] "Killing container with 
a grace period" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" podUID="c3977b01-fc92-43d9-988f-132323039996" containerName="manager" containerID="cri-o://5f26e21452d489b74fcf1e93d9628f2840398552d33847a7b9d64f4748b83cd6" gracePeriod=10 Oct 01 16:20:50 crc kubenswrapper[4949]: I1001 16:20:50.567347 4949 generic.go:334] "Generic (PLEG): container finished" podID="c3977b01-fc92-43d9-988f-132323039996" containerID="5f26e21452d489b74fcf1e93d9628f2840398552d33847a7b9d64f4748b83cd6" exitCode=0 Oct 01 16:20:50 crc kubenswrapper[4949]: I1001 16:20:50.567392 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" event={"ID":"c3977b01-fc92-43d9-988f-132323039996","Type":"ContainerDied","Data":"5f26e21452d489b74fcf1e93d9628f2840398552d33847a7b9d64f4748b83cd6"} Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.326789 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.461609 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-webhook-cert\") pod \"c3977b01-fc92-43d9-988f-132323039996\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.462049 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-apiservice-cert\") pod \"c3977b01-fc92-43d9-988f-132323039996\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.462075 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-49p5v\" (UniqueName: 
\"kubernetes.io/projected/c3977b01-fc92-43d9-988f-132323039996-kube-api-access-49p5v\") pod \"c3977b01-fc92-43d9-988f-132323039996\" (UID: \"c3977b01-fc92-43d9-988f-132323039996\") " Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.467528 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "c3977b01-fc92-43d9-988f-132323039996" (UID: "c3977b01-fc92-43d9-988f-132323039996"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.468548 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "c3977b01-fc92-43d9-988f-132323039996" (UID: "c3977b01-fc92-43d9-988f-132323039996"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.469053 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3977b01-fc92-43d9-988f-132323039996-kube-api-access-49p5v" (OuterVolumeSpecName: "kube-api-access-49p5v") pod "c3977b01-fc92-43d9-988f-132323039996" (UID: "c3977b01-fc92-43d9-988f-132323039996"). InnerVolumeSpecName "kube-api-access-49p5v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.563657 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.563691 4949 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/c3977b01-fc92-43d9-988f-132323039996-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.563703 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-49p5v\" (UniqueName: \"kubernetes.io/projected/c3977b01-fc92-43d9-988f-132323039996-kube-api-access-49p5v\") on node \"crc\" DevicePath \"\"" Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.577076 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" event={"ID":"c3977b01-fc92-43d9-988f-132323039996","Type":"ContainerDied","Data":"f12fe69cc3ed5bbbdb99773b4fa215f94b3c492b3b76bd77184134144594a112"} Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.577153 4949 scope.go:117] "RemoveContainer" containerID="5f26e21452d489b74fcf1e93d9628f2840398552d33847a7b9d64f4748b83cd6" Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.577617 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd" Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.632199 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd"] Oct 01 16:20:51 crc kubenswrapper[4949]: I1001 16:20:51.641146 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["metallb-system/metallb-operator-controller-manager-5994bb94b4-vzvwd"] Oct 01 16:20:53 crc kubenswrapper[4949]: I1001 16:20:53.613472 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3977b01-fc92-43d9-988f-132323039996" path="/var/lib/kubelet/pods/c3977b01-fc92-43d9-988f-132323039996/volumes" Oct 01 16:20:56 crc kubenswrapper[4949]: I1001 16:20:56.602515 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:20:56 crc kubenswrapper[4949]: E1001 16:20:56.603044 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.640836 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz"] Oct 01 16:20:59 crc kubenswrapper[4949]: E1001 16:20:59.641713 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c3977b01-fc92-43d9-988f-132323039996" containerName="manager" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.641728 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3977b01-fc92-43d9-988f-132323039996" containerName="manager" Oct 01 16:20:59 crc 
kubenswrapper[4949]: I1001 16:20:59.641917 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c3977b01-fc92-43d9-988f-132323039996" containerName="manager" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.642499 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.654948 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz"] Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.723000 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/127d231a-5ecc-4d28-b1eb-9ac562730952-webhook-cert\") pod \"metallb-operator-controller-manager-cf5f54bf-mpgzz\" (UID: \"127d231a-5ecc-4d28-b1eb-9ac562730952\") " pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.723300 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8xxx7\" (UniqueName: \"kubernetes.io/projected/127d231a-5ecc-4d28-b1eb-9ac562730952-kube-api-access-8xxx7\") pod \"metallb-operator-controller-manager-cf5f54bf-mpgzz\" (UID: \"127d231a-5ecc-4d28-b1eb-9ac562730952\") " pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.723364 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/127d231a-5ecc-4d28-b1eb-9ac562730952-apiservice-cert\") pod \"metallb-operator-controller-manager-cf5f54bf-mpgzz\" (UID: \"127d231a-5ecc-4d28-b1eb-9ac562730952\") " pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: 
I1001 16:20:59.824100 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8xxx7\" (UniqueName: \"kubernetes.io/projected/127d231a-5ecc-4d28-b1eb-9ac562730952-kube-api-access-8xxx7\") pod \"metallb-operator-controller-manager-cf5f54bf-mpgzz\" (UID: \"127d231a-5ecc-4d28-b1eb-9ac562730952\") " pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.824170 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/127d231a-5ecc-4d28-b1eb-9ac562730952-apiservice-cert\") pod \"metallb-operator-controller-manager-cf5f54bf-mpgzz\" (UID: \"127d231a-5ecc-4d28-b1eb-9ac562730952\") " pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.824282 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/127d231a-5ecc-4d28-b1eb-9ac562730952-webhook-cert\") pod \"metallb-operator-controller-manager-cf5f54bf-mpgzz\" (UID: \"127d231a-5ecc-4d28-b1eb-9ac562730952\") " pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.831708 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/127d231a-5ecc-4d28-b1eb-9ac562730952-apiservice-cert\") pod \"metallb-operator-controller-manager-cf5f54bf-mpgzz\" (UID: \"127d231a-5ecc-4d28-b1eb-9ac562730952\") " pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.831743 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/127d231a-5ecc-4d28-b1eb-9ac562730952-webhook-cert\") pod 
\"metallb-operator-controller-manager-cf5f54bf-mpgzz\" (UID: \"127d231a-5ecc-4d28-b1eb-9ac562730952\") " pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.838695 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8xxx7\" (UniqueName: \"kubernetes.io/projected/127d231a-5ecc-4d28-b1eb-9ac562730952-kube-api-access-8xxx7\") pod \"metallb-operator-controller-manager-cf5f54bf-mpgzz\" (UID: \"127d231a-5ecc-4d28-b1eb-9ac562730952\") " pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.886875 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc"] Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.889177 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.906406 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc"] Oct 01 16:20:59 crc kubenswrapper[4949]: I1001 16:20:59.994666 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.029586 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hvngj\" (UniqueName: \"kubernetes.io/projected/cd4c2c4f-770f-409c-88c3-f7c05f5be013-kube-api-access-hvngj\") pod \"metallb-operator-webhook-server-557dffd7fc-wmwrc\" (UID: \"cd4c2c4f-770f-409c-88c3-f7c05f5be013\") " pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.029686 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd4c2c4f-770f-409c-88c3-f7c05f5be013-webhook-cert\") pod \"metallb-operator-webhook-server-557dffd7fc-wmwrc\" (UID: \"cd4c2c4f-770f-409c-88c3-f7c05f5be013\") " pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.030357 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd4c2c4f-770f-409c-88c3-f7c05f5be013-apiservice-cert\") pod \"metallb-operator-webhook-server-557dffd7fc-wmwrc\" (UID: \"cd4c2c4f-770f-409c-88c3-f7c05f5be013\") " pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.131976 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd4c2c4f-770f-409c-88c3-f7c05f5be013-webhook-cert\") pod \"metallb-operator-webhook-server-557dffd7fc-wmwrc\" (UID: \"cd4c2c4f-770f-409c-88c3-f7c05f5be013\") " pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.132364 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd4c2c4f-770f-409c-88c3-f7c05f5be013-apiservice-cert\") pod \"metallb-operator-webhook-server-557dffd7fc-wmwrc\" (UID: \"cd4c2c4f-770f-409c-88c3-f7c05f5be013\") " pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.132457 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hvngj\" (UniqueName: \"kubernetes.io/projected/cd4c2c4f-770f-409c-88c3-f7c05f5be013-kube-api-access-hvngj\") pod \"metallb-operator-webhook-server-557dffd7fc-wmwrc\" (UID: \"cd4c2c4f-770f-409c-88c3-f7c05f5be013\") " pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.137198 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/cd4c2c4f-770f-409c-88c3-f7c05f5be013-apiservice-cert\") pod \"metallb-operator-webhook-server-557dffd7fc-wmwrc\" (UID: \"cd4c2c4f-770f-409c-88c3-f7c05f5be013\") " pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.137930 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/cd4c2c4f-770f-409c-88c3-f7c05f5be013-webhook-cert\") pod \"metallb-operator-webhook-server-557dffd7fc-wmwrc\" (UID: \"cd4c2c4f-770f-409c-88c3-f7c05f5be013\") " pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.150874 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hvngj\" (UniqueName: \"kubernetes.io/projected/cd4c2c4f-770f-409c-88c3-f7c05f5be013-kube-api-access-hvngj\") pod \"metallb-operator-webhook-server-557dffd7fc-wmwrc\" (UID: \"cd4c2c4f-770f-409c-88c3-f7c05f5be013\") 
" pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.232109 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.429353 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz"] Oct 01 16:21:00 crc kubenswrapper[4949]: W1001 16:21:00.446255 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod127d231a_5ecc_4d28_b1eb_9ac562730952.slice/crio-5cd3dc516d83964f4136be0fdb8720cc47ec5a979b8c6725c8b4c9d0c479e564 WatchSource:0}: Error finding container 5cd3dc516d83964f4136be0fdb8720cc47ec5a979b8c6725c8b4c9d0c479e564: Status 404 returned error can't find the container with id 5cd3dc516d83964f4136be0fdb8720cc47ec5a979b8c6725c8b4c9d0c479e564 Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.657387 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" event={"ID":"127d231a-5ecc-4d28-b1eb-9ac562730952","Type":"ContainerStarted","Data":"0d15878820ffdf0780980ab2ff075c10f18dd82c921bcdaf26e739f438803983"} Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.657438 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" event={"ID":"127d231a-5ecc-4d28-b1eb-9ac562730952","Type":"ContainerStarted","Data":"5cd3dc516d83964f4136be0fdb8720cc47ec5a979b8c6725c8b4c9d0c479e564"} Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.657492 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.678904 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" podStartSLOduration=1.678888749 podStartE2EDuration="1.678888749s" podCreationTimestamp="2025-10-01 16:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:21:00.677381749 +0000 UTC m=+2359.982987950" watchObservedRunningTime="2025-10-01 16:21:00.678888749 +0000 UTC m=+2359.984494940" Oct 01 16:21:00 crc kubenswrapper[4949]: I1001 16:21:00.707142 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc"] Oct 01 16:21:00 crc kubenswrapper[4949]: W1001 16:21:00.715173 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd4c2c4f_770f_409c_88c3_f7c05f5be013.slice/crio-447d6ebb7a16326f86012ff0d15a887a9c0747e88a22d16086cf90db91a5dc27 WatchSource:0}: Error finding container 447d6ebb7a16326f86012ff0d15a887a9c0747e88a22d16086cf90db91a5dc27: Status 404 returned error can't find the container with id 447d6ebb7a16326f86012ff0d15a887a9c0747e88a22d16086cf90db91a5dc27 Oct 01 16:21:01 crc kubenswrapper[4949]: I1001 16:21:01.666826 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" event={"ID":"cd4c2c4f-770f-409c-88c3-f7c05f5be013","Type":"ContainerStarted","Data":"4798bf4d3fee5d08e21ba6d360ebe8e93dc065e71e3937b498bc11ad93e30b3c"} Oct 01 16:21:01 crc kubenswrapper[4949]: I1001 16:21:01.667506 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:01 crc kubenswrapper[4949]: I1001 16:21:01.667528 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" 
event={"ID":"cd4c2c4f-770f-409c-88c3-f7c05f5be013","Type":"ContainerStarted","Data":"447d6ebb7a16326f86012ff0d15a887a9c0747e88a22d16086cf90db91a5dc27"} Oct 01 16:21:01 crc kubenswrapper[4949]: I1001 16:21:01.695224 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" podStartSLOduration=2.695202872 podStartE2EDuration="2.695202872s" podCreationTimestamp="2025-10-01 16:20:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:21:01.689502976 +0000 UTC m=+2360.995109177" watchObservedRunningTime="2025-10-01 16:21:01.695202872 +0000 UTC m=+2361.000809223" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.650102 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/frr-k8s-d8v9f"] Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.653140 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-d8v9f" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="kube-rbac-proxy-frr" containerID="cri-o://721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc" gracePeriod=2 Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.653212 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-d8v9f" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="frr-metrics" containerID="cri-o://fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57" gracePeriod=2 Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.653106 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-d8v9f" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="controller" containerID="cri-o://5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d" gracePeriod=2 Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 
16:21:07.653140 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-d8v9f" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="reloader" containerID="cri-o://4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7" gracePeriod=2 Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.653166 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-d8v9f" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="frr" containerID="cri-o://a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b" gracePeriod=2 Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.653329 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-d8v9f" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="kube-rbac-proxy" containerID="cri-o://143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28" gracePeriod=2 Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.663989 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["metallb-system/frr-k8s-d8v9f"] Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.688822 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42"] Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.689170 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="cp-metrics" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689188 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="cp-metrics" Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.689204 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="reloader" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689210 4949 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="reloader" Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.689225 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="kube-rbac-proxy-frr" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689233 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="kube-rbac-proxy-frr" Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.689248 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="controller" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689253 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="controller" Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.689269 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="frr-metrics" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689275 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="frr-metrics" Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.689291 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="cp-reloader" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689298 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="cp-reloader" Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.689309 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="kube-rbac-proxy" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689316 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="kube-rbac-proxy" Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.689324 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="cp-frr-files" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689330 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="cp-frr-files" Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.689343 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="frr" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689349 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="frr" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689501 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="frr-metrics" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689547 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="frr" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689564 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="controller" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689579 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="kube-rbac-proxy-frr" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689593 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="kube-rbac-proxy" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.689603 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="df0f332d-5479-4bfe-846e-03805ead7d11" containerName="reloader" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.690180 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.705642 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42"] Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.714837 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-gg5zc"] Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.721609 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.795669 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/speaker-g9xqm"] Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.796379 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-g9xqm" podUID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerName="speaker" containerID="cri-o://a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b" gracePeriod=2 Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.796904 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/speaker-g9xqm" podUID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerName="kube-rbac-proxy" containerID="cri-o://b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394" gracePeriod=2 Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.806230 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4qdqp\" (UniqueName: \"kubernetes.io/projected/6223d63f-7f8d-429c-8526-d0c4d21798cd-kube-api-access-4qdqp\") pod \"frr-k8s-webhook-server-64bf5d555-zhn42\" (UID: 
\"6223d63f-7f8d-429c-8526-d0c4d21798cd\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.806297 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6223d63f-7f8d-429c-8526-d0c4d21798cd-cert\") pod \"frr-k8s-webhook-server-64bf5d555-zhn42\" (UID: \"6223d63f-7f8d-429c-8526-d0c4d21798cd\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.808065 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["metallb-system/speaker-g9xqm"] Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.824741 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-mlbq6"] Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.825382 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerName="kube-rbac-proxy" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.825408 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerName="kube-rbac-proxy" Oct 01 16:21:07 crc kubenswrapper[4949]: E1001 16:21:07.825424 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerName="speaker" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.825434 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerName="speaker" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.825688 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerName="speaker" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.825727 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerName="kube-rbac-proxy" Oct 
01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.827082 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-mlbq6" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.832452 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-68d546b9d8-pf22v"] Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.836998 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.851967 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-pf22v"] Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.908530 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-metrics-certs\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.908596 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-frr-conf\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.908656 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4qdqp\" (UniqueName: \"kubernetes.io/projected/6223d63f-7f8d-429c-8526-d0c4d21798cd-kube-api-access-4qdqp\") pod \"frr-k8s-webhook-server-64bf5d555-zhn42\" (UID: \"6223d63f-7f8d-429c-8526-d0c4d21798cd\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.908693 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-metrics\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.908708 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-reloader\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.908756 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6223d63f-7f8d-429c-8526-d0c4d21798cd-cert\") pod \"frr-k8s-webhook-server-64bf5d555-zhn42\" (UID: \"6223d63f-7f8d-429c-8526-d0c4d21798cd\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.908824 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-frr-sockets\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.908845 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-frr-startup\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.908907 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nzs6q\" (UniqueName: 
\"kubernetes.io/projected/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-kube-api-access-nzs6q\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.915836 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6223d63f-7f8d-429c-8526-d0c4d21798cd-cert\") pod \"frr-k8s-webhook-server-64bf5d555-zhn42\" (UID: \"6223d63f-7f8d-429c-8526-d0c4d21798cd\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:07 crc kubenswrapper[4949]: I1001 16:21:07.928970 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4qdqp\" (UniqueName: \"kubernetes.io/projected/6223d63f-7f8d-429c-8526-d0c4d21798cd-kube-api-access-4qdqp\") pod \"frr-k8s-webhook-server-64bf5d555-zhn42\" (UID: \"6223d63f-7f8d-429c-8526-d0c4d21798cd\") " pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010482 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f15709f-4836-45cd-a3c1-1fba0a51817e-memberlist\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010548 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vt4sr\" (UniqueName: \"kubernetes.io/projected/b6da831f-cb7b-40a9-bf29-b340db8658a0-kube-api-access-vt4sr\") pod \"controller-68d546b9d8-pf22v\" (UID: \"b6da831f-cb7b-40a9-bf29-b340db8658a0\") " pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010568 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lc6w\" (UniqueName: 
\"kubernetes.io/projected/9f15709f-4836-45cd-a3c1-1fba0a51817e-kube-api-access-5lc6w\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010602 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-frr-sockets\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010622 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-frr-startup\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010646 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f15709f-4836-45cd-a3c1-1fba0a51817e-metrics-certs\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010664 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9f15709f-4836-45cd-a3c1-1fba0a51817e-metallb-excludel2\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010691 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6da831f-cb7b-40a9-bf29-b340db8658a0-metrics-certs\") pod \"controller-68d546b9d8-pf22v\" (UID: 
\"b6da831f-cb7b-40a9-bf29-b340db8658a0\") " pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010715 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nzs6q\" (UniqueName: \"kubernetes.io/projected/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-kube-api-access-nzs6q\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010757 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-metrics-certs\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010779 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-frr-conf\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010820 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-metrics\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010835 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-reloader\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.010849 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6da831f-cb7b-40a9-bf29-b340db8658a0-cert\") pod \"controller-68d546b9d8-pf22v\" (UID: \"b6da831f-cb7b-40a9-bf29-b340db8658a0\") " pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.011282 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-frr-sockets\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.011547 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-metrics\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.011615 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-reloader\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.012798 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-frr-startup\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.013157 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-frr-conf\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " 
pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.018280 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-metrics-certs\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.025651 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.026237 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nzs6q\" (UniqueName: \"kubernetes.io/projected/655538b2-2c0d-48b6-a5e3-ada4d1150dbf-kube-api-access-nzs6q\") pod \"frr-k8s-gg5zc\" (UID: \"655538b2-2c0d-48b6-a5e3-ada4d1150dbf\") " pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.042482 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.112302 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f15709f-4836-45cd-a3c1-1fba0a51817e-metrics-certs\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.112578 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9f15709f-4836-45cd-a3c1-1fba0a51817e-metallb-excludel2\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.112616 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b6da831f-cb7b-40a9-bf29-b340db8658a0-metrics-certs\") pod \"controller-68d546b9d8-pf22v\" (UID: \"b6da831f-cb7b-40a9-bf29-b340db8658a0\") " pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.112707 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6da831f-cb7b-40a9-bf29-b340db8658a0-cert\") pod \"controller-68d546b9d8-pf22v\" (UID: \"b6da831f-cb7b-40a9-bf29-b340db8658a0\") " pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.112746 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f15709f-4836-45cd-a3c1-1fba0a51817e-memberlist\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.112772 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-5lc6w\" (UniqueName: \"kubernetes.io/projected/9f15709f-4836-45cd-a3c1-1fba0a51817e-kube-api-access-5lc6w\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.112788 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vt4sr\" (UniqueName: \"kubernetes.io/projected/b6da831f-cb7b-40a9-bf29-b340db8658a0-kube-api-access-vt4sr\") pod \"controller-68d546b9d8-pf22v\" (UID: \"b6da831f-cb7b-40a9-bf29-b340db8658a0\") " pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.113694 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/9f15709f-4836-45cd-a3c1-1fba0a51817e-metallb-excludel2\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.116841 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/9f15709f-4836-45cd-a3c1-1fba0a51817e-metrics-certs\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.119370 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/b6da831f-cb7b-40a9-bf29-b340db8658a0-cert\") pod \"controller-68d546b9d8-pf22v\" (UID: \"b6da831f-cb7b-40a9-bf29-b340db8658a0\") " pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.120293 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: 
\"kubernetes.io/secret/b6da831f-cb7b-40a9-bf29-b340db8658a0-metrics-certs\") pod \"controller-68d546b9d8-pf22v\" (UID: \"b6da831f-cb7b-40a9-bf29-b340db8658a0\") " pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.132627 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/9f15709f-4836-45cd-a3c1-1fba0a51817e-memberlist\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.139530 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5lc6w\" (UniqueName: \"kubernetes.io/projected/9f15709f-4836-45cd-a3c1-1fba0a51817e-kube-api-access-5lc6w\") pod \"speaker-mlbq6\" (UID: \"9f15709f-4836-45cd-a3c1-1fba0a51817e\") " pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.140452 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vt4sr\" (UniqueName: \"kubernetes.io/projected/b6da831f-cb7b-40a9-bf29-b340db8658a0-kube-api-access-vt4sr\") pod \"controller-68d546b9d8-pf22v\" (UID: \"b6da831f-cb7b-40a9-bf29-b340db8658a0\") " pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.140954 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d8v9f" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.185813 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-mlbq6" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.201290 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.315426 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-sockets\") pod \"df0f332d-5479-4bfe-846e-03805ead7d11\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.315939 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p8qhr\" (UniqueName: \"kubernetes.io/projected/df0f332d-5479-4bfe-846e-03805ead7d11-kube-api-access-p8qhr\") pod \"df0f332d-5479-4bfe-846e-03805ead7d11\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.316316 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df0f332d-5479-4bfe-846e-03805ead7d11-metrics-certs\") pod \"df0f332d-5479-4bfe-846e-03805ead7d11\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.316363 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-conf\") pod \"df0f332d-5479-4bfe-846e-03805ead7d11\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.316600 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/df0f332d-5479-4bfe-846e-03805ead7d11-frr-startup\") pod \"df0f332d-5479-4bfe-846e-03805ead7d11\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.316668 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"reloader\" 
(UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-reloader\") pod \"df0f332d-5479-4bfe-846e-03805ead7d11\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.316817 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-metrics\") pod \"df0f332d-5479-4bfe-846e-03805ead7d11\" (UID: \"df0f332d-5479-4bfe-846e-03805ead7d11\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.317267 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-reloader" (OuterVolumeSpecName: "reloader") pod "df0f332d-5479-4bfe-846e-03805ead7d11" (UID: "df0f332d-5479-4bfe-846e-03805ead7d11"). InnerVolumeSpecName "reloader". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.317308 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/df0f332d-5479-4bfe-846e-03805ead7d11-frr-startup" (OuterVolumeSpecName: "frr-startup") pod "df0f332d-5479-4bfe-846e-03805ead7d11" (UID: "df0f332d-5479-4bfe-846e-03805ead7d11"). InnerVolumeSpecName "frr-startup". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.317451 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-conf" (OuterVolumeSpecName: "frr-conf") pod "df0f332d-5479-4bfe-846e-03805ead7d11" (UID: "df0f332d-5479-4bfe-846e-03805ead7d11"). InnerVolumeSpecName "frr-conf". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.317665 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-sockets" (OuterVolumeSpecName: "frr-sockets") pod "df0f332d-5479-4bfe-846e-03805ead7d11" (UID: "df0f332d-5479-4bfe-846e-03805ead7d11"). InnerVolumeSpecName "frr-sockets". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.318020 4949 reconciler_common.go:293] "Volume detached for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/df0f332d-5479-4bfe-846e-03805ead7d11-frr-startup\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.318043 4949 reconciler_common.go:293] "Volume detached for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-reloader\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.318054 4949 reconciler_common.go:293] "Volume detached for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-conf\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.321962 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/df0f332d-5479-4bfe-846e-03805ead7d11-kube-api-access-p8qhr" (OuterVolumeSpecName: "kube-api-access-p8qhr") pod "df0f332d-5479-4bfe-846e-03805ead7d11" (UID: "df0f332d-5479-4bfe-846e-03805ead7d11"). InnerVolumeSpecName "kube-api-access-p8qhr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.323182 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/df0f332d-5479-4bfe-846e-03805ead7d11-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "df0f332d-5479-4bfe-846e-03805ead7d11" (UID: "df0f332d-5479-4bfe-846e-03805ead7d11"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.324599 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-metrics" (OuterVolumeSpecName: "metrics") pod "df0f332d-5479-4bfe-846e-03805ead7d11" (UID: "df0f332d-5479-4bfe-846e-03805ead7d11"). InnerVolumeSpecName "metrics". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.419560 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/df0f332d-5479-4bfe-846e-03805ead7d11-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.419597 4949 reconciler_common.go:293] "Volume detached for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-metrics\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.419606 4949 reconciler_common.go:293] "Volume detached for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/df0f332d-5479-4bfe-846e-03805ead7d11-frr-sockets\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.419616 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p8qhr\" (UniqueName: \"kubernetes.io/projected/df0f332d-5479-4bfe-846e-03805ead7d11-kube-api-access-p8qhr\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc 
kubenswrapper[4949]: I1001 16:21:08.438528 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g9xqm" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.475626 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42"] Oct 01 16:21:08 crc kubenswrapper[4949]: W1001 16:21:08.490738 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6223d63f_7f8d_429c_8526_d0c4d21798cd.slice/crio-2abde5b159f567554f0cf7dd23de6be2cbb1c9bfc48965d98eb2c72e674c328c WatchSource:0}: Error finding container 2abde5b159f567554f0cf7dd23de6be2cbb1c9bfc48965d98eb2c72e674c328c: Status 404 returned error can't find the container with id 2abde5b159f567554f0cf7dd23de6be2cbb1c9bfc48965d98eb2c72e674c328c Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.623094 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metallb-excludel2\") pod \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.623560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist\") pod \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.623685 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metrics-certs\") pod \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.623711 4949 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjd4d\" (UniqueName: \"kubernetes.io/projected/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-kube-api-access-wjd4d\") pod \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\" (UID: \"6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae\") " Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.623675 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metallb-excludel2" (OuterVolumeSpecName: "metallb-excludel2") pod "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" (UID: "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae"). InnerVolumeSpecName "metallb-excludel2". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.628189 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" (UID: "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.628346 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-kube-api-access-wjd4d" (OuterVolumeSpecName: "kube-api-access-wjd4d") pod "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" (UID: "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae"). InnerVolumeSpecName "kube-api-access-wjd4d". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.630563 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist" (OuterVolumeSpecName: "memberlist") pod "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" (UID: "6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae"). 
InnerVolumeSpecName "memberlist". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.638094 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-68d546b9d8-pf22v"] Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.725655 4949 reconciler_common.go:293] "Volume detached for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metallb-excludel2\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.725685 4949 reconciler_common.go:293] "Volume detached for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-memberlist\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.725695 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.725704 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjd4d\" (UniqueName: \"kubernetes.io/projected/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae-kube-api-access-wjd4d\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.748687 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" event={"ID":"6223d63f-7f8d-429c-8526-d0c4d21798cd","Type":"ContainerStarted","Data":"2abde5b159f567554f0cf7dd23de6be2cbb1c9bfc48965d98eb2c72e674c328c"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.750734 4949 generic.go:334] "Generic (PLEG): container finished" podID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerID="b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394" exitCode=0 Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.750779 4949 
generic.go:334] "Generic (PLEG): container finished" podID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" containerID="a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b" exitCode=0 Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.750786 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-g9xqm" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.750853 4949 scope.go:117] "RemoveContainer" containerID="b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764724 4949 generic.go:334] "Generic (PLEG): container finished" podID="df0f332d-5479-4bfe-846e-03805ead7d11" containerID="721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc" exitCode=0 Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764771 4949 generic.go:334] "Generic (PLEG): container finished" podID="df0f332d-5479-4bfe-846e-03805ead7d11" containerID="143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28" exitCode=0 Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764786 4949 generic.go:334] "Generic (PLEG): container finished" podID="df0f332d-5479-4bfe-846e-03805ead7d11" containerID="fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57" exitCode=143 Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764801 4949 generic.go:334] "Generic (PLEG): container finished" podID="df0f332d-5479-4bfe-846e-03805ead7d11" containerID="4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7" exitCode=0 Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764814 4949 generic.go:334] "Generic (PLEG): container finished" podID="df0f332d-5479-4bfe-846e-03805ead7d11" containerID="a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b" exitCode=143 Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764826 4949 generic.go:334] "Generic (PLEG): container finished" podID="df0f332d-5479-4bfe-846e-03805ead7d11" 
containerID="5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d" exitCode=0 Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764901 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764918 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764926 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764933 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764941 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764948 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764956 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764964 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764971 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764982 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764990 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.764999 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.765007 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.765015 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.765022 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.765030 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.765037 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.765045 4949 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.765185 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-d8v9f" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.768287 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerStarted","Data":"1ec3170b7d8b7f25dc7a6a8b736cf53d09e93f1f0de8bac08bd0cb156a05d4f5"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.769881 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mlbq6" event={"ID":"9f15709f-4836-45cd-a3c1-1fba0a51817e","Type":"ContainerStarted","Data":"df045c267ea4582b0aa63ca2775c5424e3aa9d678ff599a5ea098393a3fed48d"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.769907 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mlbq6" event={"ID":"9f15709f-4836-45cd-a3c1-1fba0a51817e","Type":"ContainerStarted","Data":"830c5f39c16fe68ad8468620cc39502fa7092d154d0e87ad0d62eafa2225e425"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.771136 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-pf22v" 
event={"ID":"b6da831f-cb7b-40a9-bf29-b340db8658a0","Type":"ContainerStarted","Data":"efeb785a74c389301e7e841d6061859ffd99a8c374a4a054f8c999f8620b6471"} Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.787569 4949 scope.go:117] "RemoveContainer" containerID="a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.890942 4949 scope.go:117] "RemoveContainer" containerID="b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394" Oct 01 16:21:08 crc kubenswrapper[4949]: E1001 16:21:08.891614 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394\": container with ID starting with b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394 not found: ID does not exist" containerID="b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.891655 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394"} err="failed to get container status \"b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394\": rpc error: code = NotFound desc = could not find container \"b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394\": container with ID starting with b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394 not found: ID does not exist" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.891683 4949 scope.go:117] "RemoveContainer" containerID="a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b" Oct 01 16:21:08 crc kubenswrapper[4949]: E1001 16:21:08.891984 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b\": container with ID starting with a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b not found: ID does not exist" containerID="a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.892006 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b"} err="failed to get container status \"a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b\": rpc error: code = NotFound desc = could not find container \"a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b\": container with ID starting with a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b not found: ID does not exist" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.892019 4949 scope.go:117] "RemoveContainer" containerID="b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.892902 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394"} err="failed to get container status \"b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394\": rpc error: code = NotFound desc = could not find container \"b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394\": container with ID starting with b4d3eaf7c141a44b8147b2324ddee9df2a741f0fc419c40d8cc2eec13325f394 not found: ID does not exist" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.892920 4949 scope.go:117] "RemoveContainer" containerID="a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.893251 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b"} err="failed to get container status \"a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b\": rpc error: code = NotFound desc = could not find container \"a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b\": container with ID starting with a9de4c8be43074ae5ba849ae3115d7a58e2005585f3c5fb0d3e8af641b615c3b not found: ID does not exist" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.893269 4949 scope.go:117] "RemoveContainer" containerID="721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.944435 4949 scope.go:117] "RemoveContainer" containerID="143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.960924 4949 scope.go:117] "RemoveContainer" containerID="fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57" Oct 01 16:21:08 crc kubenswrapper[4949]: I1001 16:21:08.986935 4949 scope.go:117] "RemoveContainer" containerID="4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.005210 4949 scope.go:117] "RemoveContainer" containerID="a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.029197 4949 scope.go:117] "RemoveContainer" containerID="5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.048942 4949 scope.go:117] "RemoveContainer" containerID="893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.067040 4949 scope.go:117] "RemoveContainer" containerID="c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.143940 4949 scope.go:117] "RemoveContainer" 
containerID="4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.184369 4949 scope.go:117] "RemoveContainer" containerID="721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc" Oct 01 16:21:09 crc kubenswrapper[4949]: E1001 16:21:09.184771 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc\": container with ID starting with 721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc not found: ID does not exist" containerID="721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.184801 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc"} err="failed to get container status \"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc\": rpc error: code = NotFound desc = could not find container \"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc\": container with ID starting with 721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.184821 4949 scope.go:117] "RemoveContainer" containerID="143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28" Oct 01 16:21:09 crc kubenswrapper[4949]: E1001 16:21:09.185086 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28\": container with ID starting with 143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28 not found: ID does not exist" containerID="143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28" Oct 01 16:21:09 crc 
kubenswrapper[4949]: I1001 16:21:09.185160 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28"} err="failed to get container status \"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28\": rpc error: code = NotFound desc = could not find container \"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28\": container with ID starting with 143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.185193 4949 scope.go:117] "RemoveContainer" containerID="fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57" Oct 01 16:21:09 crc kubenswrapper[4949]: E1001 16:21:09.185469 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57\": container with ID starting with fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57 not found: ID does not exist" containerID="fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.185494 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57"} err="failed to get container status \"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57\": rpc error: code = NotFound desc = could not find container \"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57\": container with ID starting with fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.185508 4949 scope.go:117] "RemoveContainer" containerID="4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7" Oct 01 
16:21:09 crc kubenswrapper[4949]: E1001 16:21:09.185735 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7\": container with ID starting with 4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7 not found: ID does not exist" containerID="4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.185776 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7"} err="failed to get container status \"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7\": rpc error: code = NotFound desc = could not find container \"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7\": container with ID starting with 4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.185804 4949 scope.go:117] "RemoveContainer" containerID="a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b" Oct 01 16:21:09 crc kubenswrapper[4949]: E1001 16:21:09.186039 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b\": container with ID starting with a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b not found: ID does not exist" containerID="a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.186063 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b"} err="failed to get container status 
\"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b\": rpc error: code = NotFound desc = could not find container \"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b\": container with ID starting with a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.186076 4949 scope.go:117] "RemoveContainer" containerID="5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d" Oct 01 16:21:09 crc kubenswrapper[4949]: E1001 16:21:09.186275 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d\": container with ID starting with 5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d not found: ID does not exist" containerID="5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.186296 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d"} err="failed to get container status \"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d\": rpc error: code = NotFound desc = could not find container \"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d\": container with ID starting with 5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.186335 4949 scope.go:117] "RemoveContainer" containerID="893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882" Oct 01 16:21:09 crc kubenswrapper[4949]: E1001 16:21:09.186518 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882\": container with ID starting with 893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882 not found: ID does not exist" containerID="893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.186542 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882"} err="failed to get container status \"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882\": rpc error: code = NotFound desc = could not find container \"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882\": container with ID starting with 893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.186557 4949 scope.go:117] "RemoveContainer" containerID="c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894" Oct 01 16:21:09 crc kubenswrapper[4949]: E1001 16:21:09.186744 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894\": container with ID starting with c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894 not found: ID does not exist" containerID="c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.186769 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894"} err="failed to get container status \"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894\": rpc error: code = NotFound desc = could not find container \"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894\": container with ID 
starting with c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.186786 4949 scope.go:117] "RemoveContainer" containerID="4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c" Oct 01 16:21:09 crc kubenswrapper[4949]: E1001 16:21:09.186970 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c\": container with ID starting with 4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c not found: ID does not exist" containerID="4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.186994 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c"} err="failed to get container status \"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c\": rpc error: code = NotFound desc = could not find container \"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c\": container with ID starting with 4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.187012 4949 scope.go:117] "RemoveContainer" containerID="721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.187303 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc"} err="failed to get container status \"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc\": rpc error: code = NotFound desc = could not find container \"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc\": 
container with ID starting with 721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.187323 4949 scope.go:117] "RemoveContainer" containerID="143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.187507 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28"} err="failed to get container status \"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28\": rpc error: code = NotFound desc = could not find container \"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28\": container with ID starting with 143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.187528 4949 scope.go:117] "RemoveContainer" containerID="fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.187691 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57"} err="failed to get container status \"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57\": rpc error: code = NotFound desc = could not find container \"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57\": container with ID starting with fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.187710 4949 scope.go:117] "RemoveContainer" containerID="4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.188070 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7"} err="failed to get container status \"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7\": rpc error: code = NotFound desc = could not find container \"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7\": container with ID starting with 4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.188102 4949 scope.go:117] "RemoveContainer" containerID="a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.188388 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b"} err="failed to get container status \"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b\": rpc error: code = NotFound desc = could not find container \"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b\": container with ID starting with a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.188411 4949 scope.go:117] "RemoveContainer" containerID="5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.188712 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d"} err="failed to get container status \"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d\": rpc error: code = NotFound desc = could not find container \"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d\": container with ID starting with 5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d not found: ID does not 
exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.188734 4949 scope.go:117] "RemoveContainer" containerID="893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.188984 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882"} err="failed to get container status \"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882\": rpc error: code = NotFound desc = could not find container \"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882\": container with ID starting with 893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.189027 4949 scope.go:117] "RemoveContainer" containerID="c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.197330 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894"} err="failed to get container status \"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894\": rpc error: code = NotFound desc = could not find container \"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894\": container with ID starting with c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.197370 4949 scope.go:117] "RemoveContainer" containerID="4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.198714 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c"} err="failed to get container status 
\"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c\": rpc error: code = NotFound desc = could not find container \"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c\": container with ID starting with 4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.198750 4949 scope.go:117] "RemoveContainer" containerID="721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.199055 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc"} err="failed to get container status \"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc\": rpc error: code = NotFound desc = could not find container \"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc\": container with ID starting with 721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.199080 4949 scope.go:117] "RemoveContainer" containerID="143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.199312 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28"} err="failed to get container status \"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28\": rpc error: code = NotFound desc = could not find container \"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28\": container with ID starting with 143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.199335 4949 scope.go:117] "RemoveContainer" 
containerID="fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.199583 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57"} err="failed to get container status \"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57\": rpc error: code = NotFound desc = could not find container \"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57\": container with ID starting with fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.199601 4949 scope.go:117] "RemoveContainer" containerID="4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.199866 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7"} err="failed to get container status \"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7\": rpc error: code = NotFound desc = could not find container \"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7\": container with ID starting with 4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.199891 4949 scope.go:117] "RemoveContainer" containerID="a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.200112 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b"} err="failed to get container status \"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b\": rpc error: code = NotFound desc = could 
not find container \"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b\": container with ID starting with a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.200150 4949 scope.go:117] "RemoveContainer" containerID="5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.200393 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d"} err="failed to get container status \"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d\": rpc error: code = NotFound desc = could not find container \"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d\": container with ID starting with 5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.200415 4949 scope.go:117] "RemoveContainer" containerID="893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.200641 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882"} err="failed to get container status \"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882\": rpc error: code = NotFound desc = could not find container \"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882\": container with ID starting with 893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.200663 4949 scope.go:117] "RemoveContainer" containerID="c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 
16:21:09.200858 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894"} err="failed to get container status \"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894\": rpc error: code = NotFound desc = could not find container \"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894\": container with ID starting with c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.200879 4949 scope.go:117] "RemoveContainer" containerID="4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.201058 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c"} err="failed to get container status \"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c\": rpc error: code = NotFound desc = could not find container \"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c\": container with ID starting with 4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.201078 4949 scope.go:117] "RemoveContainer" containerID="721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.201296 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc"} err="failed to get container status \"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc\": rpc error: code = NotFound desc = could not find container \"721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc\": container with ID starting with 
721d51cda66030e43533e1c77438eac6f0c3e2ee07288fcfa1b37a27f92daefc not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.201316 4949 scope.go:117] "RemoveContainer" containerID="143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.201539 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28"} err="failed to get container status \"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28\": rpc error: code = NotFound desc = could not find container \"143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28\": container with ID starting with 143ea5406a67a3c2184d13a4494599256abc2149c4a9339822f847ca5c3e3e28 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.201559 4949 scope.go:117] "RemoveContainer" containerID="fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.201795 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57"} err="failed to get container status \"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57\": rpc error: code = NotFound desc = could not find container \"fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57\": container with ID starting with fda606ab02323c81bd9b00d176b29e0a676d5b92ed85faf1e656f69f72761b57 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.201812 4949 scope.go:117] "RemoveContainer" containerID="4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.202366 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7"} err="failed to get container status \"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7\": rpc error: code = NotFound desc = could not find container \"4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7\": container with ID starting with 4f8c9a5ab15b7db4053866141a5f95637490ae959d120f240e934d69a08613c7 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.202399 4949 scope.go:117] "RemoveContainer" containerID="a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.204365 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b"} err="failed to get container status \"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b\": rpc error: code = NotFound desc = could not find container \"a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b\": container with ID starting with a117e16d45f195bf48b78a6ce4af11bf5cd926545d30112ed595b49f3f21ab5b not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.204411 4949 scope.go:117] "RemoveContainer" containerID="5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.204670 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d"} err="failed to get container status \"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d\": rpc error: code = NotFound desc = could not find container \"5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d\": container with ID starting with 5657390087c46e46a3b36fa1a732eea973e99b4a94594f4bf35a41627421554d not found: ID does not 
exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.204693 4949 scope.go:117] "RemoveContainer" containerID="893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.204999 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882"} err="failed to get container status \"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882\": rpc error: code = NotFound desc = could not find container \"893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882\": container with ID starting with 893109630b07b8a64bc691bf0b5cbf8e7bc7c8a36c3fa9391db366750a072882 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.205868 4949 scope.go:117] "RemoveContainer" containerID="c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.206856 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894"} err="failed to get container status \"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894\": rpc error: code = NotFound desc = could not find container \"c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894\": container with ID starting with c7452318866e052928a6393a52b66a5b45a8745e28e5c6272ebe04d403e6e894 not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.206889 4949 scope.go:117] "RemoveContainer" containerID="4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.207183 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c"} err="failed to get container status 
\"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c\": rpc error: code = NotFound desc = could not find container \"4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c\": container with ID starting with 4e1ec65c181866ac4ac1d31a7d13e2b4c65a77fb8f3f1350922323603d86680c not found: ID does not exist" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.617708 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae" path="/var/lib/kubelet/pods/6d17ae19-3e65-4a3a-b49a-d8c513d2a2ae/volumes" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.619027 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="df0f332d-5479-4bfe-846e-03805ead7d11" path="/var/lib/kubelet/pods/df0f332d-5479-4bfe-846e-03805ead7d11/volumes" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.783366 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-mlbq6" event={"ID":"9f15709f-4836-45cd-a3c1-1fba0a51817e","Type":"ContainerStarted","Data":"bdf9c813811da0dbd8e5923fd7c483ba5cbabfd6ecd25d5de73bd31dc126d9d1"} Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.784058 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-mlbq6" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.786020 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-pf22v" event={"ID":"b6da831f-cb7b-40a9-bf29-b340db8658a0","Type":"ContainerStarted","Data":"ce819276b3c1592d8359fe2c8d42ae73c64a117649f8d4eafafe1ecfa2ccce60"} Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.786054 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-68d546b9d8-pf22v" event={"ID":"b6da831f-cb7b-40a9-bf29-b340db8658a0","Type":"ContainerStarted","Data":"19251933fb01532ede98b0a1a58231cd89cde81299b6e1af86533e740f8ba017"} Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.786997 
4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.804783 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-mlbq6" podStartSLOduration=2.8047689289999997 podStartE2EDuration="2.804768929s" podCreationTimestamp="2025-10-01 16:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:21:09.801307514 +0000 UTC m=+2369.106913705" watchObservedRunningTime="2025-10-01 16:21:09.804768929 +0000 UTC m=+2369.110375110" Oct 01 16:21:09 crc kubenswrapper[4949]: I1001 16:21:09.823912 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-68d546b9d8-pf22v" podStartSLOduration=2.823894432 podStartE2EDuration="2.823894432s" podCreationTimestamp="2025-10-01 16:21:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:21:09.81502203 +0000 UTC m=+2369.120628221" watchObservedRunningTime="2025-10-01 16:21:09.823894432 +0000 UTC m=+2369.129500623" Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.236495 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-557dffd7fc-wmwrc" Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.305339 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p"] Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.305572 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" podUID="72d082fe-c501-45dc-a52c-39abe6d7326a" containerName="webhook-server" 
containerID="cri-o://a480d81ef37de1008b4719578786822a626b9e4ab82fa386802ac28dcd1eea95" gracePeriod=2 Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.319269 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p"] Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.601774 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:21:10 crc kubenswrapper[4949]: E1001 16:21:10.602323 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.797912 4949 generic.go:334] "Generic (PLEG): container finished" podID="72d082fe-c501-45dc-a52c-39abe6d7326a" containerID="a480d81ef37de1008b4719578786822a626b9e4ab82fa386802ac28dcd1eea95" exitCode=0 Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.799448 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c56abc1c9fc8e877d165cfaf82355210f3f9d45eec9579d04ae2466a490f896" Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.826967 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.974737 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-webhook-cert\") pod \"72d082fe-c501-45dc-a52c-39abe6d7326a\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.974808 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8xdpc\" (UniqueName: \"kubernetes.io/projected/72d082fe-c501-45dc-a52c-39abe6d7326a-kube-api-access-8xdpc\") pod \"72d082fe-c501-45dc-a52c-39abe6d7326a\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.974875 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-apiservice-cert\") pod \"72d082fe-c501-45dc-a52c-39abe6d7326a\" (UID: \"72d082fe-c501-45dc-a52c-39abe6d7326a\") " Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.980355 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d082fe-c501-45dc-a52c-39abe6d7326a-kube-api-access-8xdpc" (OuterVolumeSpecName: "kube-api-access-8xdpc") pod "72d082fe-c501-45dc-a52c-39abe6d7326a" (UID: "72d082fe-c501-45dc-a52c-39abe6d7326a"). InnerVolumeSpecName "kube-api-access-8xdpc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.980868 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "72d082fe-c501-45dc-a52c-39abe6d7326a" (UID: "72d082fe-c501-45dc-a52c-39abe6d7326a"). 
InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:10 crc kubenswrapper[4949]: I1001 16:21:10.980902 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "72d082fe-c501-45dc-a52c-39abe6d7326a" (UID: "72d082fe-c501-45dc-a52c-39abe6d7326a"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:11 crc kubenswrapper[4949]: I1001 16:21:11.078016 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:11 crc kubenswrapper[4949]: I1001 16:21:11.078046 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8xdpc\" (UniqueName: \"kubernetes.io/projected/72d082fe-c501-45dc-a52c-39abe6d7326a-kube-api-access-8xdpc\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:11 crc kubenswrapper[4949]: I1001 16:21:11.078057 4949 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/72d082fe-c501-45dc-a52c-39abe6d7326a-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:11 crc kubenswrapper[4949]: I1001 16:21:11.632832 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="72d082fe-c501-45dc-a52c-39abe6d7326a" path="/var/lib/kubelet/pods/72d082fe-c501-45dc-a52c-39abe6d7326a/volumes" Oct 01 16:21:11 crc kubenswrapper[4949]: I1001 16:21:11.837563 4949 generic.go:334] "Generic (PLEG): container finished" podID="bd38b0c3-ae51-4ad3-ae6c-2c926614301c" containerID="2caaf8670408608e8b6913b517d28bbd0d947ebe02030b1a0832aa4259d32e8a" exitCode=0 Oct 01 16:21:11 crc kubenswrapper[4949]: I1001 16:21:11.837646 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" event={"ID":"bd38b0c3-ae51-4ad3-ae6c-2c926614301c","Type":"ContainerDied","Data":"2caaf8670408608e8b6913b517d28bbd0d947ebe02030b1a0832aa4259d32e8a"} Oct 01 16:21:11 crc kubenswrapper[4949]: I1001 16:21:11.837668 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-75b55bf9f4-xhb9p" Oct 01 16:21:15 crc kubenswrapper[4949]: I1001 16:21:15.879282 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" event={"ID":"bd38b0c3-ae51-4ad3-ae6c-2c926614301c","Type":"ContainerDied","Data":"09bad2000fbf4865fe2b5105179577facad92634378f71ba0e3c3730cd27ff75"} Oct 01 16:21:15 crc kubenswrapper[4949]: I1001 16:21:15.879796 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09bad2000fbf4865fe2b5105179577facad92634378f71ba0e3c3730cd27ff75" Oct 01 16:21:15 crc kubenswrapper[4949]: I1001 16:21:15.927392 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.080737 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-inventory\") pod \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.080917 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ssh-key\") pod \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.080949 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5vrhs\" (UniqueName: \"kubernetes.io/projected/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-kube-api-access-5vrhs\") pod \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.080999 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ceph\") pod \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\" (UID: \"bd38b0c3-ae51-4ad3-ae6c-2c926614301c\") " Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.089360 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-kube-api-access-5vrhs" (OuterVolumeSpecName: "kube-api-access-5vrhs") pod "bd38b0c3-ae51-4ad3-ae6c-2c926614301c" (UID: "bd38b0c3-ae51-4ad3-ae6c-2c926614301c"). InnerVolumeSpecName "kube-api-access-5vrhs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.089564 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ceph" (OuterVolumeSpecName: "ceph") pod "bd38b0c3-ae51-4ad3-ae6c-2c926614301c" (UID: "bd38b0c3-ae51-4ad3-ae6c-2c926614301c"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.112353 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "bd38b0c3-ae51-4ad3-ae6c-2c926614301c" (UID: "bd38b0c3-ae51-4ad3-ae6c-2c926614301c"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.114966 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-inventory" (OuterVolumeSpecName: "inventory") pod "bd38b0c3-ae51-4ad3-ae6c-2c926614301c" (UID: "bd38b0c3-ae51-4ad3-ae6c-2c926614301c"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.184184 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.184224 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.184303 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5vrhs\" (UniqueName: \"kubernetes.io/projected/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-kube-api-access-5vrhs\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.184315 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/bd38b0c3-ae51-4ad3-ae6c-2c926614301c-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.461826 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-7k5jk"] Oct 01 16:21:16 crc kubenswrapper[4949]: E1001 16:21:16.462313 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="72d082fe-c501-45dc-a52c-39abe6d7326a" containerName="webhook-server" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.462336 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="72d082fe-c501-45dc-a52c-39abe6d7326a" containerName="webhook-server" Oct 01 16:21:16 crc kubenswrapper[4949]: E1001 16:21:16.462377 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bd38b0c3-ae51-4ad3-ae6c-2c926614301c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.462388 4949 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="bd38b0c3-ae51-4ad3-ae6c-2c926614301c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.462592 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="bd38b0c3-ae51-4ad3-ae6c-2c926614301c" containerName="install-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.462613 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="72d082fe-c501-45dc-a52c-39abe6d7326a" containerName="webhook-server" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.466917 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.474527 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k5jk"] Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.494828 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8cmsm\" (UniqueName: \"kubernetes.io/projected/7d776d96-22f9-417d-ade1-ee33e5b1a34b-kube-api-access-8cmsm\") pod \"redhat-marketplace-7k5jk\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.494963 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-utilities\") pod \"redhat-marketplace-7k5jk\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.494984 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-catalog-content\") pod \"redhat-marketplace-7k5jk\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.596490 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8cmsm\" (UniqueName: \"kubernetes.io/projected/7d776d96-22f9-417d-ade1-ee33e5b1a34b-kube-api-access-8cmsm\") pod \"redhat-marketplace-7k5jk\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.596849 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-utilities\") pod \"redhat-marketplace-7k5jk\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.596877 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-catalog-content\") pod \"redhat-marketplace-7k5jk\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.597545 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-catalog-content\") pod \"redhat-marketplace-7k5jk\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.597689 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-utilities\") pod \"redhat-marketplace-7k5jk\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.619443 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8cmsm\" (UniqueName: \"kubernetes.io/projected/7d776d96-22f9-417d-ade1-ee33e5b1a34b-kube-api-access-8cmsm\") pod \"redhat-marketplace-7k5jk\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.786605 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.887923 4949 generic.go:334] "Generic (PLEG): container finished" podID="655538b2-2c0d-48b6-a5e3-ada4d1150dbf" containerID="3bd4f971ecc09495f72b45fbdb84f6d4eb9c74c779b32f5bf31cafe4864a6a42" exitCode=0 Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.887990 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerDied","Data":"3bd4f971ecc09495f72b45fbdb84f6d4eb9c74c779b32f5bf31cafe4864a6a42"} Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.890384 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-os-edpm-deployment-openstack-edpm-ipam-279tn" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.890432 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" event={"ID":"6223d63f-7f8d-429c-8526-d0c4d21798cd","Type":"ContainerStarted","Data":"b7bd7bce3ea91528e06316b1feac90604a2906db1cbc75130b7e6cb04d73b28a"} Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.890803 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:16 crc kubenswrapper[4949]: I1001 16:21:16.993326 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" podStartSLOduration=2.527736568 podStartE2EDuration="9.99330366s" podCreationTimestamp="2025-10-01 16:21:07 +0000 UTC" firstStartedPulling="2025-10-01 16:21:08.492908709 +0000 UTC m=+2367.798514900" lastFinishedPulling="2025-10-01 16:21:15.958475801 +0000 UTC m=+2375.264081992" observedRunningTime="2025-10-01 16:21:16.970888067 +0000 UTC m=+2376.276494248" watchObservedRunningTime="2025-10-01 16:21:16.99330366 +0000 UTC m=+2376.298909851" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.054685 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7"] Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.056436 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.058541 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.059057 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.059244 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.059449 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.059614 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.085489 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7"] Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.207295 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4s4\" (UniqueName: \"kubernetes.io/projected/46ad0a13-d88e-4d0f-baa4-45b795d6f204-kube-api-access-ct4s4\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.207611 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: 
\"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.207693 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.207723 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.309101 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4s4\" (UniqueName: \"kubernetes.io/projected/46ad0a13-d88e-4d0f-baa4-45b795d6f204-kube-api-access-ct4s4\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.309235 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.309345 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.309381 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.315896 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ssh-key\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.317742 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-inventory\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.318076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ceph\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" 
Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.328853 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4s4\" (UniqueName: \"kubernetes.io/projected/46ad0a13-d88e-4d0f-baa4-45b795d6f204-kube-api-access-ct4s4\") pod \"ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.383957 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k5jk"] Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.397885 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.902466 4949 generic.go:334] "Generic (PLEG): container finished" podID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerID="465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e" exitCode=0 Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.902993 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k5jk" event={"ID":"7d776d96-22f9-417d-ade1-ee33e5b1a34b","Type":"ContainerDied","Data":"465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e"} Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.903095 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k5jk" event={"ID":"7d776d96-22f9-417d-ade1-ee33e5b1a34b","Type":"ContainerStarted","Data":"cedc87efe4e6a848484ca9fa78d9b0a26d7e024f022c4f863cedcfcbb229cbe5"} Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.909624 4949 generic.go:334] "Generic (PLEG): container finished" podID="655538b2-2c0d-48b6-a5e3-ada4d1150dbf" containerID="4b3dd7a359af097c951b6c43c70e7ec94e41508a0834b2cad1e8ccc99d7bb22b" exitCode=0 Oct 01 16:21:17 
crc kubenswrapper[4949]: I1001 16:21:17.909831 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerDied","Data":"4b3dd7a359af097c951b6c43c70e7ec94e41508a0834b2cad1e8ccc99d7bb22b"} Oct 01 16:21:17 crc kubenswrapper[4949]: I1001 16:21:17.958659 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7"] Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.190879 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-mlbq6" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.205996 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-68d546b9d8-pf22v" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.309311 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/controller-5d688f5ffc-xpj4x"] Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.310361 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-5d688f5ffc-xpj4x" podUID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" containerName="controller" containerID="cri-o://509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa" gracePeriod=2 Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.310493 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/controller-5d688f5ffc-xpj4x" podUID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" containerName="kube-rbac-proxy" containerID="cri-o://5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518" gracePeriod=2 Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.323630 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["metallb-system/controller-5d688f5ffc-xpj4x"] Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.693087 4949 util.go:48] "No ready sandbox for 
pod can be found. Need to start a new one" pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.756515 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-cert\") pod \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.756560 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r94sd\" (UniqueName: \"kubernetes.io/projected/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-kube-api-access-r94sd\") pod \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.756680 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-metrics-certs\") pod \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\" (UID: \"4d7b6770-ccc5-4f7c-88f7-a32bc9541190\") " Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.760411 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-kube-api-access-r94sd" (OuterVolumeSpecName: "kube-api-access-r94sd") pod "4d7b6770-ccc5-4f7c-88f7-a32bc9541190" (UID: "4d7b6770-ccc5-4f7c-88f7-a32bc9541190"). InnerVolumeSpecName "kube-api-access-r94sd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.762922 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-cert" (OuterVolumeSpecName: "cert") pod "4d7b6770-ccc5-4f7c-88f7-a32bc9541190" (UID: "4d7b6770-ccc5-4f7c-88f7-a32bc9541190"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.767398 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "4d7b6770-ccc5-4f7c-88f7-a32bc9541190" (UID: "4d7b6770-ccc5-4f7c-88f7-a32bc9541190"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.862862 4949 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-metrics-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.863191 4949 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.863205 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r94sd\" (UniqueName: \"kubernetes.io/projected/4d7b6770-ccc5-4f7c-88f7-a32bc9541190-kube-api-access-r94sd\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.921800 4949 generic.go:334] "Generic (PLEG): container finished" podID="655538b2-2c0d-48b6-a5e3-ada4d1150dbf" containerID="47fc1943dd8fe3ab71fe13941d9b1a3f00e0dc8fd5b85752d61097fba78a8697" exitCode=0 Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.921872 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerDied","Data":"47fc1943dd8fe3ab71fe13941d9b1a3f00e0dc8fd5b85752d61097fba78a8697"} Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.928776 4949 generic.go:334] "Generic (PLEG): container finished" podID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" 
containerID="5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518" exitCode=0 Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.928799 4949 generic.go:334] "Generic (PLEG): container finished" podID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" containerID="509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa" exitCode=0 Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.928814 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-5d688f5ffc-xpj4x" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.930449 4949 scope.go:117] "RemoveContainer" containerID="5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.931606 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" event={"ID":"46ad0a13-d88e-4d0f-baa4-45b795d6f204","Type":"ContainerStarted","Data":"6335966f386763eab9ae7d07138be3f346f9ae9ca1461eb5a989cb09c886567d"} Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.931636 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" event={"ID":"46ad0a13-d88e-4d0f-baa4-45b795d6f204","Type":"ContainerStarted","Data":"6f52dff0c98cf7c93282c852a6358c6e442bc7bc9c9b36bc06beca93cee9cca7"} Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.940189 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k5jk" event={"ID":"7d776d96-22f9-417d-ade1-ee33e5b1a34b","Type":"ContainerStarted","Data":"bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85"} Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.968917 4949 scope.go:117] "RemoveContainer" containerID="509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa" Oct 01 16:21:18 crc kubenswrapper[4949]: I1001 16:21:18.972486 4949 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" podStartSLOduration=1.492334274 podStartE2EDuration="1.972466863s" podCreationTimestamp="2025-10-01 16:21:17 +0000 UTC" firstStartedPulling="2025-10-01 16:21:18.009388217 +0000 UTC m=+2377.314994408" lastFinishedPulling="2025-10-01 16:21:18.489520806 +0000 UTC m=+2377.795126997" observedRunningTime="2025-10-01 16:21:18.970289313 +0000 UTC m=+2378.275895504" watchObservedRunningTime="2025-10-01 16:21:18.972466863 +0000 UTC m=+2378.278073074" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.064415 4949 scope.go:117] "RemoveContainer" containerID="5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518" Oct 01 16:21:19 crc kubenswrapper[4949]: E1001 16:21:19.065571 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518\": container with ID starting with 5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518 not found: ID does not exist" containerID="5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.065652 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518"} err="failed to get container status \"5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518\": rpc error: code = NotFound desc = could not find container \"5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518\": container with ID starting with 5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518 not found: ID does not exist" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.065683 4949 scope.go:117] "RemoveContainer" containerID="509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa" 
Oct 01 16:21:19 crc kubenswrapper[4949]: E1001 16:21:19.066093 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa\": container with ID starting with 509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa not found: ID does not exist" containerID="509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.066309 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa"} err="failed to get container status \"509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa\": rpc error: code = NotFound desc = could not find container \"509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa\": container with ID starting with 509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa not found: ID does not exist" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.066328 4949 scope.go:117] "RemoveContainer" containerID="5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.066608 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518"} err="failed to get container status \"5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518\": rpc error: code = NotFound desc = could not find container \"5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518\": container with ID starting with 5eaf5133999b120f71ead5e64859b24778ade91c5a19020be2dd39d9e2ea3518 not found: ID does not exist" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.066632 4949 scope.go:117] "RemoveContainer" 
containerID="509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.066882 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa"} err="failed to get container status \"509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa\": rpc error: code = NotFound desc = could not find container \"509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa\": container with ID starting with 509b7af7d104472f7d4c55fd23cbef4e42a8bfd1c246458f611eb34856b31cfa not found: ID does not exist" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.613347 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" path="/var/lib/kubelet/pods/4d7b6770-ccc5-4f7c-88f7-a32bc9541190/volumes" Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.956162 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerStarted","Data":"e578031f00ab9901976fbf4902578f9f15bedb59da8ab4a5151a7202649d96a5"} Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.956215 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerStarted","Data":"d405842f2b654882795b126b727ab27e6cb58b69b7ea66af626e8dc617911165"} Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.956230 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerStarted","Data":"b141e165a7fd0b4ce77b4c688ee18c36d1a502e2ba9e92b7c97531c569421950"} Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.961397 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerID="bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85" exitCode=0 Oct 01 16:21:19 crc kubenswrapper[4949]: I1001 16:21:19.961660 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k5jk" event={"ID":"7d776d96-22f9-417d-ade1-ee33e5b1a34b","Type":"ContainerDied","Data":"bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85"} Oct 01 16:21:20 crc kubenswrapper[4949]: I1001 16:21:20.977195 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerStarted","Data":"2b347e89620be6e1e019f94c23ada8a00518b702562b6873d3799a41f2b4e883"} Oct 01 16:21:20 crc kubenswrapper[4949]: I1001 16:21:20.977554 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:20 crc kubenswrapper[4949]: I1001 16:21:20.977570 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerStarted","Data":"eb43351eb415c5175ad6619b202beb1fd31f31af2ec92c972ef1f6428014550f"} Oct 01 16:21:20 crc kubenswrapper[4949]: I1001 16:21:20.977583 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-gg5zc" event={"ID":"655538b2-2c0d-48b6-a5e3-ada4d1150dbf","Type":"ContainerStarted","Data":"d701d1f0089d3caa572ae81e026f7aada2d9b4fa1e88d714762e2a9a5b5e9b39"} Oct 01 16:21:20 crc kubenswrapper[4949]: I1001 16:21:20.980051 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k5jk" event={"ID":"7d776d96-22f9-417d-ade1-ee33e5b1a34b","Type":"ContainerStarted","Data":"faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb"} Oct 01 16:21:21 crc kubenswrapper[4949]: I1001 16:21:21.018059 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="metallb-system/frr-k8s-gg5zc" podStartSLOduration=6.30728827 podStartE2EDuration="14.018045132s" podCreationTimestamp="2025-10-01 16:21:07 +0000 UTC" firstStartedPulling="2025-10-01 16:21:08.207260042 +0000 UTC m=+2367.512866243" lastFinishedPulling="2025-10-01 16:21:15.918016914 +0000 UTC m=+2375.223623105" observedRunningTime="2025-10-01 16:21:21.014365511 +0000 UTC m=+2380.319971702" watchObservedRunningTime="2025-10-01 16:21:21.018045132 +0000 UTC m=+2380.323651323" Oct 01 16:21:21 crc kubenswrapper[4949]: I1001 16:21:21.041996 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-7k5jk" podStartSLOduration=2.51896632 podStartE2EDuration="5.041974676s" podCreationTimestamp="2025-10-01 16:21:16 +0000 UTC" firstStartedPulling="2025-10-01 16:21:17.906890221 +0000 UTC m=+2377.212496412" lastFinishedPulling="2025-10-01 16:21:20.429898577 +0000 UTC m=+2379.735504768" observedRunningTime="2025-10-01 16:21:21.032520908 +0000 UTC m=+2380.338127109" watchObservedRunningTime="2025-10-01 16:21:21.041974676 +0000 UTC m=+2380.347580867" Oct 01 16:21:21 crc kubenswrapper[4949]: I1001 16:21:21.616819 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:21:21 crc kubenswrapper[4949]: E1001 16:21:21.617330 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:21:22 crc kubenswrapper[4949]: I1001 16:21:22.999211 4949 generic.go:334] "Generic (PLEG): container finished" podID="46ad0a13-d88e-4d0f-baa4-45b795d6f204" 
containerID="6335966f386763eab9ae7d07138be3f346f9ae9ca1461eb5a989cb09c886567d" exitCode=0 Oct 01 16:21:22 crc kubenswrapper[4949]: I1001 16:21:22.999262 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" event={"ID":"46ad0a13-d88e-4d0f-baa4-45b795d6f204","Type":"ContainerDied","Data":"6335966f386763eab9ae7d07138be3f346f9ae9ca1461eb5a989cb09c886567d"} Oct 01 16:21:23 crc kubenswrapper[4949]: I1001 16:21:23.045685 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:23 crc kubenswrapper[4949]: I1001 16:21:23.086696 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.484051 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.678708 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct4s4\" (UniqueName: \"kubernetes.io/projected/46ad0a13-d88e-4d0f-baa4-45b795d6f204-kube-api-access-ct4s4\") pod \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.680953 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ssh-key\") pod \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.681007 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ceph\") pod \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\" (UID: 
\"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.681054 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-inventory\") pod \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\" (UID: \"46ad0a13-d88e-4d0f-baa4-45b795d6f204\") " Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.691347 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ceph" (OuterVolumeSpecName: "ceph") pod "46ad0a13-d88e-4d0f-baa4-45b795d6f204" (UID: "46ad0a13-d88e-4d0f-baa4-45b795d6f204"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.706744 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ad0a13-d88e-4d0f-baa4-45b795d6f204-kube-api-access-ct4s4" (OuterVolumeSpecName: "kube-api-access-ct4s4") pod "46ad0a13-d88e-4d0f-baa4-45b795d6f204" (UID: "46ad0a13-d88e-4d0f-baa4-45b795d6f204"). InnerVolumeSpecName "kube-api-access-ct4s4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.719045 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-inventory" (OuterVolumeSpecName: "inventory") pod "46ad0a13-d88e-4d0f-baa4-45b795d6f204" (UID: "46ad0a13-d88e-4d0f-baa4-45b795d6f204"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.723726 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "46ad0a13-d88e-4d0f-baa4-45b795d6f204" (UID: "46ad0a13-d88e-4d0f-baa4-45b795d6f204"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.783037 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.783074 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct4s4\" (UniqueName: \"kubernetes.io/projected/46ad0a13-d88e-4d0f-baa4-45b795d6f204-kube-api-access-ct4s4\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.783088 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:24 crc kubenswrapper[4949]: I1001 16:21:24.783096 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/46ad0a13-d88e-4d0f-baa4-45b795d6f204-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.020598 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" event={"ID":"46ad0a13-d88e-4d0f-baa4-45b795d6f204","Type":"ContainerDied","Data":"6f52dff0c98cf7c93282c852a6358c6e442bc7bc9c9b36bc06beca93cee9cca7"} Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.020653 4949 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="6f52dff0c98cf7c93282c852a6358c6e442bc7bc9c9b36bc06beca93cee9cca7" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.020693 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.115591 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz"] Oct 01 16:21:25 crc kubenswrapper[4949]: E1001 16:21:25.115992 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46ad0a13-d88e-4d0f-baa4-45b795d6f204" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.116017 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="46ad0a13-d88e-4d0f-baa4-45b795d6f204" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 01 16:21:25 crc kubenswrapper[4949]: E1001 16:21:25.116058 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" containerName="kube-rbac-proxy" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.116067 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" containerName="kube-rbac-proxy" Oct 01 16:21:25 crc kubenswrapper[4949]: E1001 16:21:25.116078 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" containerName="controller" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.116085 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" containerName="controller" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.116332 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" containerName="controller" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.116358 4949 
memory_manager.go:354] "RemoveStaleState removing state" podUID="46ad0a13-d88e-4d0f-baa4-45b795d6f204" containerName="ceph-hci-pre-edpm-deployment-openstack-edpm-ipam" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.116373 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d7b6770-ccc5-4f7c-88f7-a32bc9541190" containerName="kube-rbac-proxy" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.117103 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.119256 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.119448 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.120562 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.122174 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.122242 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.139372 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz"] Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.291738 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g297r\" (UniqueName: \"kubernetes.io/projected/5638925a-de61-4d2c-8863-88658f9bb7fd-kube-api-access-g297r\") pod 
\"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.291933 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.292182 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.292321 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.393788 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g297r\" (UniqueName: \"kubernetes.io/projected/5638925a-de61-4d2c-8863-88658f9bb7fd-kube-api-access-g297r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc 
kubenswrapper[4949]: I1001 16:21:25.393917 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.394006 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.394054 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.398250 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-inventory\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.398593 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ceph\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: 
\"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.400844 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ssh-key\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.420472 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g297r\" (UniqueName: \"kubernetes.io/projected/5638925a-de61-4d2c-8863-88658f9bb7fd-kube-api-access-g297r\") pod \"configure-os-edpm-deployment-openstack-edpm-ipam-nghnz\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:25 crc kubenswrapper[4949]: I1001 16:21:25.477051 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:21:26 crc kubenswrapper[4949]: I1001 16:21:26.026785 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz"] Oct 01 16:21:26 crc kubenswrapper[4949]: W1001 16:21:26.044648 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5638925a_de61_4d2c_8863_88658f9bb7fd.slice/crio-aa076c439da0585c4542734a6203b0df53d7d7a0751a3d35f0e42a0503626d22 WatchSource:0}: Error finding container aa076c439da0585c4542734a6203b0df53d7d7a0751a3d35f0e42a0503626d22: Status 404 returned error can't find the container with id aa076c439da0585c4542734a6203b0df53d7d7a0751a3d35f0e42a0503626d22 Oct 01 16:21:26 crc kubenswrapper[4949]: I1001 16:21:26.791917 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:26 crc kubenswrapper[4949]: I1001 16:21:26.792362 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:26 crc kubenswrapper[4949]: I1001 16:21:26.858178 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:27 crc kubenswrapper[4949]: I1001 16:21:27.053015 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" event={"ID":"5638925a-de61-4d2c-8863-88658f9bb7fd","Type":"ContainerStarted","Data":"a36c6efaa28a47187fd0524ee0a980e78f23000865cb1e564993d440191eb1fc"} Oct 01 16:21:27 crc kubenswrapper[4949]: I1001 16:21:27.053306 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" 
event={"ID":"5638925a-de61-4d2c-8863-88658f9bb7fd","Type":"ContainerStarted","Data":"aa076c439da0585c4542734a6203b0df53d7d7a0751a3d35f0e42a0503626d22"} Oct 01 16:21:27 crc kubenswrapper[4949]: I1001 16:21:27.090456 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" podStartSLOduration=1.5313855699999999 podStartE2EDuration="2.090431269s" podCreationTimestamp="2025-10-01 16:21:25 +0000 UTC" firstStartedPulling="2025-10-01 16:21:26.066397165 +0000 UTC m=+2385.372003356" lastFinishedPulling="2025-10-01 16:21:26.625442824 +0000 UTC m=+2385.931049055" observedRunningTime="2025-10-01 16:21:27.084420294 +0000 UTC m=+2386.390026485" watchObservedRunningTime="2025-10-01 16:21:27.090431269 +0000 UTC m=+2386.396037490" Oct 01 16:21:27 crc kubenswrapper[4949]: I1001 16:21:27.193402 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:27 crc kubenswrapper[4949]: I1001 16:21:27.649258 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k5jk"] Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.032283 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-64bf5d555-zhn42" Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.111738 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8"] Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.112035 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" podUID="a1d2cf29-ae90-40e0-81bf-a6661fe62cb2" containerName="frr-k8s-webhook-server" containerID="cri-o://bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852" gracePeriod=10 Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.649589 
4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.751281 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2krrv\" (UniqueName: \"kubernetes.io/projected/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-kube-api-access-2krrv\") pod \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\" (UID: \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\") " Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.751656 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-cert\") pod \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\" (UID: \"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2\") " Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.758680 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-kube-api-access-2krrv" (OuterVolumeSpecName: "kube-api-access-2krrv") pod "a1d2cf29-ae90-40e0-81bf-a6661fe62cb2" (UID: "a1d2cf29-ae90-40e0-81bf-a6661fe62cb2"). InnerVolumeSpecName "kube-api-access-2krrv". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.758864 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-cert" (OuterVolumeSpecName: "cert") pod "a1d2cf29-ae90-40e0-81bf-a6661fe62cb2" (UID: "a1d2cf29-ae90-40e0-81bf-a6661fe62cb2"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.853518 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2krrv\" (UniqueName: \"kubernetes.io/projected/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-kube-api-access-2krrv\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:28 crc kubenswrapper[4949]: I1001 16:21:28.853546 4949 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.071165 4949 generic.go:334] "Generic (PLEG): container finished" podID="a1d2cf29-ae90-40e0-81bf-a6661fe62cb2" containerID="bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852" exitCode=0 Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.071484 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-7k5jk" podUID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerName="registry-server" containerID="cri-o://faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb" gracePeriod=2 Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.071616 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.074329 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" event={"ID":"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2","Type":"ContainerDied","Data":"bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852"} Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.074397 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8" event={"ID":"a1d2cf29-ae90-40e0-81bf-a6661fe62cb2","Type":"ContainerDied","Data":"30a5b470225545c1e904cfffdec82205f30081e4a5185b2f86399a5b2343b27e"} Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.074416 4949 scope.go:117] "RemoveContainer" containerID="bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.105735 4949 scope.go:117] "RemoveContainer" containerID="bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852" Oct 01 16:21:29 crc kubenswrapper[4949]: E1001 16:21:29.106585 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852\": container with ID starting with bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852 not found: ID does not exist" containerID="bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.106628 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852"} err="failed to get container status \"bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852\": rpc error: code = NotFound desc = could not find container 
\"bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852\": container with ID starting with bef953aca680f2fce32579e0bd6e0deee0fea52fda16df76e31f79be5f29c852 not found: ID does not exist" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.113886 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8"] Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.121841 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["metallb-system/frr-k8s-webhook-server-5478bdb765-wvjl8"] Oct 01 16:21:29 crc kubenswrapper[4949]: E1001 16:21:29.251470 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1d2cf29_ae90_40e0_81bf_a6661fe62cb2.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7d776d96_22f9_417d_ade1_ee33e5b1a34b.slice/crio-faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda1d2cf29_ae90_40e0_81bf_a6661fe62cb2.slice/crio-30a5b470225545c1e904cfffdec82205f30081e4a5185b2f86399a5b2343b27e\": RecentStats: unable to find data in memory cache]" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.578758 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.615727 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a1d2cf29-ae90-40e0-81bf-a6661fe62cb2" path="/var/lib/kubelet/pods/a1d2cf29-ae90-40e0-81bf-a6661fe62cb2/volumes" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.771697 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8cmsm\" (UniqueName: \"kubernetes.io/projected/7d776d96-22f9-417d-ade1-ee33e5b1a34b-kube-api-access-8cmsm\") pod \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.771735 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-utilities\") pod \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.771818 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-catalog-content\") pod \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\" (UID: \"7d776d96-22f9-417d-ade1-ee33e5b1a34b\") " Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.772984 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-utilities" (OuterVolumeSpecName: "utilities") pod "7d776d96-22f9-417d-ade1-ee33e5b1a34b" (UID: "7d776d96-22f9-417d-ade1-ee33e5b1a34b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.778363 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7d776d96-22f9-417d-ade1-ee33e5b1a34b-kube-api-access-8cmsm" (OuterVolumeSpecName: "kube-api-access-8cmsm") pod "7d776d96-22f9-417d-ade1-ee33e5b1a34b" (UID: "7d776d96-22f9-417d-ade1-ee33e5b1a34b"). InnerVolumeSpecName "kube-api-access-8cmsm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.788472 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7d776d96-22f9-417d-ade1-ee33e5b1a34b" (UID: "7d776d96-22f9-417d-ade1-ee33e5b1a34b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.874328 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.874375 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8cmsm\" (UniqueName: \"kubernetes.io/projected/7d776d96-22f9-417d-ade1-ee33e5b1a34b-kube-api-access-8cmsm\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:29 crc kubenswrapper[4949]: I1001 16:21:29.874390 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7d776d96-22f9-417d-ade1-ee33e5b1a34b-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.089777 4949 generic.go:334] "Generic (PLEG): container finished" podID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" 
containerID="faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb" exitCode=0 Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.089838 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k5jk" event={"ID":"7d776d96-22f9-417d-ade1-ee33e5b1a34b","Type":"ContainerDied","Data":"faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb"} Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.089876 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-7k5jk" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.089909 4949 scope.go:117] "RemoveContainer" containerID="faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.089887 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-7k5jk" event={"ID":"7d776d96-22f9-417d-ade1-ee33e5b1a34b","Type":"ContainerDied","Data":"cedc87efe4e6a848484ca9fa78d9b0a26d7e024f022c4f863cedcfcbb229cbe5"} Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.122349 4949 scope.go:117] "RemoveContainer" containerID="bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.128303 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k5jk"] Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.138343 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-7k5jk"] Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.143979 4949 scope.go:117] "RemoveContainer" containerID="465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.195582 4949 scope.go:117] "RemoveContainer" containerID="faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb" Oct 01 
16:21:30 crc kubenswrapper[4949]: E1001 16:21:30.196346 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb\": container with ID starting with faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb not found: ID does not exist" containerID="faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.196407 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb"} err="failed to get container status \"faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb\": rpc error: code = NotFound desc = could not find container \"faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb\": container with ID starting with faf5a71c0d8e183a620213fedc6e7cc2939cb0ccd28d3ecde96fcf343c4986eb not found: ID does not exist" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.196437 4949 scope.go:117] "RemoveContainer" containerID="bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85" Oct 01 16:21:30 crc kubenswrapper[4949]: E1001 16:21:30.196704 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85\": container with ID starting with bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85 not found: ID does not exist" containerID="bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.196737 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85"} err="failed to get container status 
\"bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85\": rpc error: code = NotFound desc = could not find container \"bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85\": container with ID starting with bccfd5c2523d6fca25a8466843d817cd525d5fd29603fb6f8e3d0cae91f41f85 not found: ID does not exist" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.196763 4949 scope.go:117] "RemoveContainer" containerID="465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e" Oct 01 16:21:30 crc kubenswrapper[4949]: E1001 16:21:30.197064 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e\": container with ID starting with 465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e not found: ID does not exist" containerID="465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e" Oct 01 16:21:30 crc kubenswrapper[4949]: I1001 16:21:30.197107 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e"} err="failed to get container status \"465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e\": rpc error: code = NotFound desc = could not find container \"465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e\": container with ID starting with 465b59f4270105080b4071c9fdc3b6588ac7cad91975d6d7b8f9986352d4581e not found: ID does not exist" Oct 01 16:21:31 crc kubenswrapper[4949]: I1001 16:21:31.625353 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" path="/var/lib/kubelet/pods/7d776d96-22f9-417d-ade1-ee33e5b1a34b/volumes" Oct 01 16:21:35 crc kubenswrapper[4949]: I1001 16:21:35.602322 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 
16:21:35 crc kubenswrapper[4949]: E1001 16:21:35.603910 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:21:38 crc kubenswrapper[4949]: I1001 16:21:38.045812 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-gg5zc" Oct 01 16:21:39 crc kubenswrapper[4949]: I1001 16:21:39.998101 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-cf5f54bf-mpgzz" Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.072157 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg"] Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.072467 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" podUID="bc880bcc-217b-4bff-aa0f-c9f7d6fff779" containerName="manager" containerID="cri-o://a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11" gracePeriod=10 Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.652001 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.791952 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btm6m\" (UniqueName: \"kubernetes.io/projected/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-kube-api-access-btm6m\") pod \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.792028 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-apiservice-cert\") pod \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.792197 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-webhook-cert\") pod \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\" (UID: \"bc880bcc-217b-4bff-aa0f-c9f7d6fff779\") " Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.797951 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-kube-api-access-btm6m" (OuterVolumeSpecName: "kube-api-access-btm6m") pod "bc880bcc-217b-4bff-aa0f-c9f7d6fff779" (UID: "bc880bcc-217b-4bff-aa0f-c9f7d6fff779"). InnerVolumeSpecName "kube-api-access-btm6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.798073 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "bc880bcc-217b-4bff-aa0f-c9f7d6fff779" (UID: "bc880bcc-217b-4bff-aa0f-c9f7d6fff779"). 
InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.799200 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "bc880bcc-217b-4bff-aa0f-c9f7d6fff779" (UID: "bc880bcc-217b-4bff-aa0f-c9f7d6fff779"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.895170 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btm6m\" (UniqueName: \"kubernetes.io/projected/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-kube-api-access-btm6m\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.895205 4949 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-apiservice-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:40 crc kubenswrapper[4949]: I1001 16:21:40.895220 4949 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bc880bcc-217b-4bff-aa0f-c9f7d6fff779-webhook-cert\") on node \"crc\" DevicePath \"\"" Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.194464 4949 generic.go:334] "Generic (PLEG): container finished" podID="bc880bcc-217b-4bff-aa0f-c9f7d6fff779" containerID="a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11" exitCode=0 Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.194506 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" event={"ID":"bc880bcc-217b-4bff-aa0f-c9f7d6fff779","Type":"ContainerDied","Data":"a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11"} Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.194531 4949 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" event={"ID":"bc880bcc-217b-4bff-aa0f-c9f7d6fff779","Type":"ContainerDied","Data":"648cc9c44cdd4033e6f5f1084ad0c98413746d710e805b8e3b2c99683ffe7caa"} Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.194526 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg" Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.194543 4949 scope.go:117] "RemoveContainer" containerID="a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11" Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.224888 4949 scope.go:117] "RemoveContainer" containerID="a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11" Oct 01 16:21:41 crc kubenswrapper[4949]: E1001 16:21:41.225463 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11\": container with ID starting with a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11 not found: ID does not exist" containerID="a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11" Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.225509 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11"} err="failed to get container status \"a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11\": rpc error: code = NotFound desc = could not find container \"a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11\": container with ID starting with a7a1e7525601012a3e5bfd7fe73bbbe11778c22f3809d3750eb758f4247c0d11 not found: ID does not exist" Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.233236 4949 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg"] Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.239980 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6fcd764774-zd8pg"] Oct 01 16:21:41 crc kubenswrapper[4949]: I1001 16:21:41.618462 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc880bcc-217b-4bff-aa0f-c9f7d6fff779" path="/var/lib/kubelet/pods/bc880bcc-217b-4bff-aa0f-c9f7d6fff779/volumes" Oct 01 16:21:46 crc kubenswrapper[4949]: I1001 16:21:46.601599 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:21:46 crc kubenswrapper[4949]: E1001 16:21:46.602268 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:21:58 crc kubenswrapper[4949]: I1001 16:21:58.601733 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:21:58 crc kubenswrapper[4949]: E1001 16:21:58.602491 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:22:11 crc kubenswrapper[4949]: I1001 16:22:11.613742 4949 scope.go:117] "RemoveContainer" 
containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:22:11 crc kubenswrapper[4949]: E1001 16:22:11.616632 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:22:13 crc kubenswrapper[4949]: I1001 16:22:13.532049 4949 generic.go:334] "Generic (PLEG): container finished" podID="5638925a-de61-4d2c-8863-88658f9bb7fd" containerID="a36c6efaa28a47187fd0524ee0a980e78f23000865cb1e564993d440191eb1fc" exitCode=0 Oct 01 16:22:13 crc kubenswrapper[4949]: I1001 16:22:13.532231 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" event={"ID":"5638925a-de61-4d2c-8863-88658f9bb7fd","Type":"ContainerDied","Data":"a36c6efaa28a47187fd0524ee0a980e78f23000865cb1e564993d440191eb1fc"} Oct 01 16:22:14 crc kubenswrapper[4949]: I1001 16:22:14.956606 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.067110 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-inventory\") pod \"5638925a-de61-4d2c-8863-88658f9bb7fd\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.067237 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g297r\" (UniqueName: \"kubernetes.io/projected/5638925a-de61-4d2c-8863-88658f9bb7fd-kube-api-access-g297r\") pod \"5638925a-de61-4d2c-8863-88658f9bb7fd\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.067311 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ssh-key\") pod \"5638925a-de61-4d2c-8863-88658f9bb7fd\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.067452 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ceph\") pod \"5638925a-de61-4d2c-8863-88658f9bb7fd\" (UID: \"5638925a-de61-4d2c-8863-88658f9bb7fd\") " Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.072988 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5638925a-de61-4d2c-8863-88658f9bb7fd-kube-api-access-g297r" (OuterVolumeSpecName: "kube-api-access-g297r") pod "5638925a-de61-4d2c-8863-88658f9bb7fd" (UID: "5638925a-de61-4d2c-8863-88658f9bb7fd"). InnerVolumeSpecName "kube-api-access-g297r". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.082430 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ceph" (OuterVolumeSpecName: "ceph") pod "5638925a-de61-4d2c-8863-88658f9bb7fd" (UID: "5638925a-de61-4d2c-8863-88658f9bb7fd"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.092402 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "5638925a-de61-4d2c-8863-88658f9bb7fd" (UID: "5638925a-de61-4d2c-8863-88658f9bb7fd"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.095147 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-inventory" (OuterVolumeSpecName: "inventory") pod "5638925a-de61-4d2c-8863-88658f9bb7fd" (UID: "5638925a-de61-4d2c-8863-88658f9bb7fd"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.169561 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.169766 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g297r\" (UniqueName: \"kubernetes.io/projected/5638925a-de61-4d2c-8863-88658f9bb7fd-kube-api-access-g297r\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.169868 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.169926 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/5638925a-de61-4d2c-8863-88658f9bb7fd-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.554943 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" event={"ID":"5638925a-de61-4d2c-8863-88658f9bb7fd","Type":"ContainerDied","Data":"aa076c439da0585c4542734a6203b0df53d7d7a0751a3d35f0e42a0503626d22"} Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.555003 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa076c439da0585c4542734a6203b0df53d7d7a0751a3d35f0e42a0503626d22" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.555054 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/configure-os-edpm-deployment-openstack-edpm-ipam-nghnz" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.745663 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7g2xq"] Oct 01 16:22:15 crc kubenswrapper[4949]: E1001 16:22:15.745990 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a1d2cf29-ae90-40e0-81bf-a6661fe62cb2" containerName="frr-k8s-webhook-server" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.746005 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a1d2cf29-ae90-40e0-81bf-a6661fe62cb2" containerName="frr-k8s-webhook-server" Oct 01 16:22:15 crc kubenswrapper[4949]: E1001 16:22:15.746035 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5638925a-de61-4d2c-8863-88658f9bb7fd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.746044 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5638925a-de61-4d2c-8863-88658f9bb7fd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:22:15 crc kubenswrapper[4949]: E1001 16:22:15.746055 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerName="extract-utilities" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.746063 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerName="extract-utilities" Oct 01 16:22:15 crc kubenswrapper[4949]: E1001 16:22:15.746071 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerName="registry-server" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.746077 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerName="registry-server" Oct 01 16:22:15 crc kubenswrapper[4949]: E1001 
16:22:15.746096 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bc880bcc-217b-4bff-aa0f-c9f7d6fff779" containerName="manager" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.746104 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc880bcc-217b-4bff-aa0f-c9f7d6fff779" containerName="manager" Oct 01 16:22:15 crc kubenswrapper[4949]: E1001 16:22:15.746114 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerName="extract-content" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.750165 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerName="extract-content" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.750506 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5638925a-de61-4d2c-8863-88658f9bb7fd" containerName="configure-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.750524 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="bc880bcc-217b-4bff-aa0f-c9f7d6fff779" containerName="manager" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.750560 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="7d776d96-22f9-417d-ade1-ee33e5b1a34b" containerName="registry-server" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.750580 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a1d2cf29-ae90-40e0-81bf-a6661fe62cb2" containerName="frr-k8s-webhook-server" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.751178 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.754050 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.754322 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.754483 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.755472 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.755641 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.763938 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7g2xq"] Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.888482 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfvd9\" (UniqueName: \"kubernetes.io/projected/d73263fe-3cef-4a57-92b8-2d70f128c8d4-kube-api-access-sfvd9\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.888656 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 
16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.889037 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.889114 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ceph\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.991316 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.991392 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ceph\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.991506 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sfvd9\" (UniqueName: \"kubernetes.io/projected/d73263fe-3cef-4a57-92b8-2d70f128c8d4-kube-api-access-sfvd9\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: 
\"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.991596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.995876 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ceph\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:15 crc kubenswrapper[4949]: I1001 16:22:15.996078 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-inventory-0\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:16 crc kubenswrapper[4949]: I1001 16:22:16.007369 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ssh-key-openstack-edpm-ipam\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:16 crc kubenswrapper[4949]: I1001 16:22:16.017475 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfvd9\" (UniqueName: \"kubernetes.io/projected/d73263fe-3cef-4a57-92b8-2d70f128c8d4-kube-api-access-sfvd9\") pod \"ssh-known-hosts-edpm-deployment-7g2xq\" (UID: 
\"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:16 crc kubenswrapper[4949]: I1001 16:22:16.065644 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:16 crc kubenswrapper[4949]: I1001 16:22:16.645967 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 16:22:16 crc kubenswrapper[4949]: I1001 16:22:16.646357 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ssh-known-hosts-edpm-deployment-7g2xq"] Oct 01 16:22:17 crc kubenswrapper[4949]: I1001 16:22:17.576270 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" event={"ID":"d73263fe-3cef-4a57-92b8-2d70f128c8d4","Type":"ContainerStarted","Data":"3733cfba64b9a7f115807c45c14435e3a83f704083489c2af2781b3e0816d853"} Oct 01 16:22:18 crc kubenswrapper[4949]: I1001 16:22:18.588824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" event={"ID":"d73263fe-3cef-4a57-92b8-2d70f128c8d4","Type":"ContainerStarted","Data":"afeef8c448a2bd46a5f68620346f70371ac45a2825f38f5a44cd02173a5fb7e0"} Oct 01 16:22:18 crc kubenswrapper[4949]: I1001 16:22:18.619632 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" podStartSLOduration=2.795748943 podStartE2EDuration="3.619603823s" podCreationTimestamp="2025-10-01 16:22:15 +0000 UTC" firstStartedPulling="2025-10-01 16:22:16.645710299 +0000 UTC m=+2435.951316510" lastFinishedPulling="2025-10-01 16:22:17.469565149 +0000 UTC m=+2436.775171390" observedRunningTime="2025-10-01 16:22:18.611249414 +0000 UTC m=+2437.916855615" watchObservedRunningTime="2025-10-01 16:22:18.619603823 +0000 UTC m=+2437.925210044" Oct 01 16:22:23 crc kubenswrapper[4949]: I1001 16:22:23.601327 4949 scope.go:117] 
"RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:22:23 crc kubenswrapper[4949]: E1001 16:22:23.602130 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:22:27 crc kubenswrapper[4949]: I1001 16:22:27.676538 4949 generic.go:334] "Generic (PLEG): container finished" podID="d73263fe-3cef-4a57-92b8-2d70f128c8d4" containerID="afeef8c448a2bd46a5f68620346f70371ac45a2825f38f5a44cd02173a5fb7e0" exitCode=0 Oct 01 16:22:27 crc kubenswrapper[4949]: I1001 16:22:27.676608 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" event={"ID":"d73263fe-3cef-4a57-92b8-2d70f128c8d4","Type":"ContainerDied","Data":"afeef8c448a2bd46a5f68620346f70371ac45a2825f38f5a44cd02173a5fb7e0"} Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.117939 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.250811 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sfvd9\" (UniqueName: \"kubernetes.io/projected/d73263fe-3cef-4a57-92b8-2d70f128c8d4-kube-api-access-sfvd9\") pod \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.250958 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ceph\") pod \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.250992 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-inventory-0\") pod \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.251056 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ssh-key-openstack-edpm-ipam\") pod \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\" (UID: \"d73263fe-3cef-4a57-92b8-2d70f128c8d4\") " Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.256528 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ceph" (OuterVolumeSpecName: "ceph") pod "d73263fe-3cef-4a57-92b8-2d70f128c8d4" (UID: "d73263fe-3cef-4a57-92b8-2d70f128c8d4"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.256699 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d73263fe-3cef-4a57-92b8-2d70f128c8d4-kube-api-access-sfvd9" (OuterVolumeSpecName: "kube-api-access-sfvd9") pod "d73263fe-3cef-4a57-92b8-2d70f128c8d4" (UID: "d73263fe-3cef-4a57-92b8-2d70f128c8d4"). InnerVolumeSpecName "kube-api-access-sfvd9". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.282278 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-inventory-0" (OuterVolumeSpecName: "inventory-0") pod "d73263fe-3cef-4a57-92b8-2d70f128c8d4" (UID: "d73263fe-3cef-4a57-92b8-2d70f128c8d4"). InnerVolumeSpecName "inventory-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.285195 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ssh-key-openstack-edpm-ipam" (OuterVolumeSpecName: "ssh-key-openstack-edpm-ipam") pod "d73263fe-3cef-4a57-92b8-2d70f128c8d4" (UID: "d73263fe-3cef-4a57-92b8-2d70f128c8d4"). InnerVolumeSpecName "ssh-key-openstack-edpm-ipam". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.353505 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.353539 4949 reconciler_common.go:293] "Volume detached for volume \"inventory-0\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-inventory-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.353550 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key-openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/secret/d73263fe-3cef-4a57-92b8-2d70f128c8d4-ssh-key-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.353569 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sfvd9\" (UniqueName: \"kubernetes.io/projected/d73263fe-3cef-4a57-92b8-2d70f128c8d4-kube-api-access-sfvd9\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.696806 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" event={"ID":"d73263fe-3cef-4a57-92b8-2d70f128c8d4","Type":"ContainerDied","Data":"3733cfba64b9a7f115807c45c14435e3a83f704083489c2af2781b3e0816d853"} Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.696856 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3733cfba64b9a7f115807c45c14435e3a83f704083489c2af2781b3e0816d853" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.696904 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ssh-known-hosts-edpm-deployment-7g2xq" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.795925 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf"] Oct 01 16:22:29 crc kubenswrapper[4949]: E1001 16:22:29.796325 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d73263fe-3cef-4a57-92b8-2d70f128c8d4" containerName="ssh-known-hosts-edpm-deployment" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.796342 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d73263fe-3cef-4a57-92b8-2d70f128c8d4" containerName="ssh-known-hosts-edpm-deployment" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.796533 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d73263fe-3cef-4a57-92b8-2d70f128c8d4" containerName="ssh-known-hosts-edpm-deployment" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.797093 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.799399 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.799531 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.799675 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.799784 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.800460 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.812585 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf"] Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.863375 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gjvsd\" (UniqueName: \"kubernetes.io/projected/cd7ebc74-d7fc-479d-9707-077690577317-kube-api-access-gjvsd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.863428 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " 
pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.863632 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.863966 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.965909 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.966035 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gjvsd\" (UniqueName: \"kubernetes.io/projected/cd7ebc74-d7fc-479d-9707-077690577317-kube-api-access-gjvsd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.966088 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.966318 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.973537 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ssh-key\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.973587 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-inventory\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:29 crc kubenswrapper[4949]: I1001 16:22:29.980283 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ceph\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:30 crc kubenswrapper[4949]: I1001 16:22:29.999992 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"kube-api-access-gjvsd\" (UniqueName: \"kubernetes.io/projected/cd7ebc74-d7fc-479d-9707-077690577317-kube-api-access-gjvsd\") pod \"run-os-edpm-deployment-openstack-edpm-ipam-h6htf\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:30 crc kubenswrapper[4949]: I1001 16:22:30.118241 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:30 crc kubenswrapper[4949]: I1001 16:22:30.760024 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf"] Oct 01 16:22:31 crc kubenswrapper[4949]: I1001 16:22:31.737787 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" event={"ID":"cd7ebc74-d7fc-479d-9707-077690577317","Type":"ContainerStarted","Data":"777fcb7c7b4084055f16c64731cd0779dac62b5b1d7538bb23866aba456e25a0"} Oct 01 16:22:31 crc kubenswrapper[4949]: I1001 16:22:31.738223 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" event={"ID":"cd7ebc74-d7fc-479d-9707-077690577317","Type":"ContainerStarted","Data":"2a1ba9eff47659989c387542b8c05663d330e042377326c10e70d1a5442ad8d9"} Oct 01 16:22:31 crc kubenswrapper[4949]: I1001 16:22:31.756582 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" podStartSLOduration=2.208002052 podStartE2EDuration="2.756564903s" podCreationTimestamp="2025-10-01 16:22:29 +0000 UTC" firstStartedPulling="2025-10-01 16:22:30.761978172 +0000 UTC m=+2450.067584373" lastFinishedPulling="2025-10-01 16:22:31.310540993 +0000 UTC m=+2450.616147224" observedRunningTime="2025-10-01 16:22:31.754890386 +0000 UTC m=+2451.060496587" watchObservedRunningTime="2025-10-01 16:22:31.756564903 +0000 UTC 
m=+2451.062171104" Oct 01 16:22:37 crc kubenswrapper[4949]: I1001 16:22:37.601944 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:22:37 crc kubenswrapper[4949]: E1001 16:22:37.602679 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:22:39 crc kubenswrapper[4949]: I1001 16:22:39.817842 4949 generic.go:334] "Generic (PLEG): container finished" podID="cd7ebc74-d7fc-479d-9707-077690577317" containerID="777fcb7c7b4084055f16c64731cd0779dac62b5b1d7538bb23866aba456e25a0" exitCode=0 Oct 01 16:22:39 crc kubenswrapper[4949]: I1001 16:22:39.817944 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" event={"ID":"cd7ebc74-d7fc-479d-9707-077690577317","Type":"ContainerDied","Data":"777fcb7c7b4084055f16c64731cd0779dac62b5b1d7538bb23866aba456e25a0"} Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.224452 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.298094 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ssh-key\") pod \"cd7ebc74-d7fc-479d-9707-077690577317\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.298357 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ceph\") pod \"cd7ebc74-d7fc-479d-9707-077690577317\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.298404 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-inventory\") pod \"cd7ebc74-d7fc-479d-9707-077690577317\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.298517 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gjvsd\" (UniqueName: \"kubernetes.io/projected/cd7ebc74-d7fc-479d-9707-077690577317-kube-api-access-gjvsd\") pod \"cd7ebc74-d7fc-479d-9707-077690577317\" (UID: \"cd7ebc74-d7fc-479d-9707-077690577317\") " Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.303731 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ceph" (OuterVolumeSpecName: "ceph") pod "cd7ebc74-d7fc-479d-9707-077690577317" (UID: "cd7ebc74-d7fc-479d-9707-077690577317"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.304542 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd7ebc74-d7fc-479d-9707-077690577317-kube-api-access-gjvsd" (OuterVolumeSpecName: "kube-api-access-gjvsd") pod "cd7ebc74-d7fc-479d-9707-077690577317" (UID: "cd7ebc74-d7fc-479d-9707-077690577317"). InnerVolumeSpecName "kube-api-access-gjvsd". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.323339 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "cd7ebc74-d7fc-479d-9707-077690577317" (UID: "cd7ebc74-d7fc-479d-9707-077690577317"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.324292 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-inventory" (OuterVolumeSpecName: "inventory") pod "cd7ebc74-d7fc-479d-9707-077690577317" (UID: "cd7ebc74-d7fc-479d-9707-077690577317"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.401473 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.401507 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.401516 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/cd7ebc74-d7fc-479d-9707-077690577317-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.401525 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gjvsd\" (UniqueName: \"kubernetes.io/projected/cd7ebc74-d7fc-479d-9707-077690577317-kube-api-access-gjvsd\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.840474 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" event={"ID":"cd7ebc74-d7fc-479d-9707-077690577317","Type":"ContainerDied","Data":"2a1ba9eff47659989c387542b8c05663d330e042377326c10e70d1a5442ad8d9"} Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.841271 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a1ba9eff47659989c387542b8c05663d330e042377326c10e70d1a5442ad8d9" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.840538 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/run-os-edpm-deployment-openstack-edpm-ipam-h6htf" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.913747 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld"] Oct 01 16:22:41 crc kubenswrapper[4949]: E1001 16:22:41.914304 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cd7ebc74-d7fc-479d-9707-077690577317" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.914332 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cd7ebc74-d7fc-479d-9707-077690577317" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.914594 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="cd7ebc74-d7fc-479d-9707-077690577317" containerName="run-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.915383 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.919279 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.919305 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.919845 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.919916 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.919998 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:22:41 crc kubenswrapper[4949]: I1001 16:22:41.922573 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld"] Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.011477 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.011551 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mhpsk\" (UniqueName: \"kubernetes.io/projected/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-kube-api-access-mhpsk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " 
pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.011695 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.011725 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.113620 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mhpsk\" (UniqueName: \"kubernetes.io/projected/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-kube-api-access-mhpsk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.113693 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.113715 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.113815 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.120028 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-inventory\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.120558 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ssh-key\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.128021 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ceph\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.141048 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mhpsk\" (UniqueName: \"kubernetes.io/projected/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-kube-api-access-mhpsk\") pod \"reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.229360 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.783903 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld"] Oct 01 16:22:42 crc kubenswrapper[4949]: W1001 16:22:42.792417 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda6c07eb6_177e_415a_b24d_0ad1e81e3ab9.slice/crio-06b4dc30cc03fed298536e9950b0670fa7f442c197b33d54f3cb428db5cbc35b WatchSource:0}: Error finding container 06b4dc30cc03fed298536e9950b0670fa7f442c197b33d54f3cb428db5cbc35b: Status 404 returned error can't find the container with id 06b4dc30cc03fed298536e9950b0670fa7f442c197b33d54f3cb428db5cbc35b Oct 01 16:22:42 crc kubenswrapper[4949]: I1001 16:22:42.849202 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" event={"ID":"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9","Type":"ContainerStarted","Data":"06b4dc30cc03fed298536e9950b0670fa7f442c197b33d54f3cb428db5cbc35b"} Oct 01 16:22:43 crc kubenswrapper[4949]: I1001 16:22:43.861556 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" event={"ID":"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9","Type":"ContainerStarted","Data":"126f5bfec19420ab885a828edee97a9ce0c9f483c843e485fc2c17dddda1a9c6"} Oct 01 16:22:43 crc kubenswrapper[4949]: 
I1001 16:22:43.890346 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" podStartSLOduration=2.389481342 podStartE2EDuration="2.890324365s" podCreationTimestamp="2025-10-01 16:22:41 +0000 UTC" firstStartedPulling="2025-10-01 16:22:42.795075284 +0000 UTC m=+2462.100681475" lastFinishedPulling="2025-10-01 16:22:43.295918307 +0000 UTC m=+2462.601524498" observedRunningTime="2025-10-01 16:22:43.882650086 +0000 UTC m=+2463.188256277" watchObservedRunningTime="2025-10-01 16:22:43.890324365 +0000 UTC m=+2463.195930556" Oct 01 16:22:50 crc kubenswrapper[4949]: I1001 16:22:50.601598 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:22:50 crc kubenswrapper[4949]: E1001 16:22:50.602720 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:22:51 crc kubenswrapper[4949]: E1001 16:22:51.238182 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice/crio-2a1ba9eff47659989c387542b8c05663d330e042377326c10e70d1a5442ad8d9\": RecentStats: unable to find data in memory cache]" Oct 01 16:22:52 crc kubenswrapper[4949]: I1001 16:22:52.951798 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="a6c07eb6-177e-415a-b24d-0ad1e81e3ab9" containerID="126f5bfec19420ab885a828edee97a9ce0c9f483c843e485fc2c17dddda1a9c6" exitCode=0 Oct 01 16:22:52 crc kubenswrapper[4949]: I1001 16:22:52.951887 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" event={"ID":"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9","Type":"ContainerDied","Data":"126f5bfec19420ab885a828edee97a9ce0c9f483c843e485fc2c17dddda1a9c6"} Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.329506 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.487274 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ceph\") pod \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.487327 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mhpsk\" (UniqueName: \"kubernetes.io/projected/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-kube-api-access-mhpsk\") pod \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.487526 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-inventory\") pod \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.487633 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ssh-key\") pod 
\"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\" (UID: \"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9\") " Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.499315 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ceph" (OuterVolumeSpecName: "ceph") pod "a6c07eb6-177e-415a-b24d-0ad1e81e3ab9" (UID: "a6c07eb6-177e-415a-b24d-0ad1e81e3ab9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.503266 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-kube-api-access-mhpsk" (OuterVolumeSpecName: "kube-api-access-mhpsk") pod "a6c07eb6-177e-415a-b24d-0ad1e81e3ab9" (UID: "a6c07eb6-177e-415a-b24d-0ad1e81e3ab9"). InnerVolumeSpecName "kube-api-access-mhpsk". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.522071 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-inventory" (OuterVolumeSpecName: "inventory") pod "a6c07eb6-177e-415a-b24d-0ad1e81e3ab9" (UID: "a6c07eb6-177e-415a-b24d-0ad1e81e3ab9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.528783 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "a6c07eb6-177e-415a-b24d-0ad1e81e3ab9" (UID: "a6c07eb6-177e-415a-b24d-0ad1e81e3ab9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.589612 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.589647 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mhpsk\" (UniqueName: \"kubernetes.io/projected/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-kube-api-access-mhpsk\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.589658 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.589666 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/a6c07eb6-177e-415a-b24d-0ad1e81e3ab9-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.980556 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" event={"ID":"a6c07eb6-177e-415a-b24d-0ad1e81e3ab9","Type":"ContainerDied","Data":"06b4dc30cc03fed298536e9950b0670fa7f442c197b33d54f3cb428db5cbc35b"} Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.980612 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06b4dc30cc03fed298536e9950b0670fa7f442c197b33d54f3cb428db5cbc35b" Oct 01 16:22:54 crc kubenswrapper[4949]: I1001 16:22:54.980636 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.104039 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr"] Oct 01 16:22:55 crc kubenswrapper[4949]: E1001 16:22:55.104581 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6c07eb6-177e-415a-b24d-0ad1e81e3ab9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.104607 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6c07eb6-177e-415a-b24d-0ad1e81e3ab9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.105595 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6c07eb6-177e-415a-b24d-0ad1e81e3ab9" containerName="reboot-os-edpm-deployment-openstack-edpm-ipam" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.106575 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.109734 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-neutron-metadata-default-certs-0" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.109752 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.109776 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-ovn-default-certs-0" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.109771 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.109930 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.110013 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-libvirt-default-certs-0" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.112870 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.117371 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr"] Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.118841 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216063 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216084 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216111 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216240 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216290 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216419 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216627 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216810 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: 
\"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216844 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216867 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216947 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d9mc4\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-kube-api-access-d9mc4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.216990 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-libvirt-default-certs-0\") pod 
\"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.318937 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319049 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319078 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319104 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 
01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319162 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d9mc4\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-kube-api-access-d9mc4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319205 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319231 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319254 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319277 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319300 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319327 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319352 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.319404 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: 
\"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.324226 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ceph\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.324305 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-inventory\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.326001 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-bootstrap-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.326115 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ssh-key\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc 
kubenswrapper[4949]: I1001 16:22:55.326294 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.326503 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-nova-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.326669 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-libvirt-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.327226 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-ovn-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.328246 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.330032 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ovn-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.330039 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-repo-setup-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.331076 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-neutron-metadata-combined-ca-bundle\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.342308 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d9mc4\" (UniqueName: 
\"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-kube-api-access-d9mc4\") pod \"install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.433719 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.815753 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr"] Oct 01 16:22:55 crc kubenswrapper[4949]: I1001 16:22:55.992769 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" event={"ID":"eff7bc51-9128-4d18-8e2e-05c8b779e7ba","Type":"ContainerStarted","Data":"d358cde2151225a41fad1b7dae3880749e82293e7146bd36f66466f3fb290023"} Oct 01 16:22:57 crc kubenswrapper[4949]: I1001 16:22:57.016396 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" event={"ID":"eff7bc51-9128-4d18-8e2e-05c8b779e7ba","Type":"ContainerStarted","Data":"d6245546fd79d7c25f8e01686fbb0732390c71e03f18e9ec5c30ea85741007c4"} Oct 01 16:22:57 crc kubenswrapper[4949]: I1001 16:22:57.049468 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" podStartSLOduration=1.572720962 podStartE2EDuration="2.049452724s" podCreationTimestamp="2025-10-01 16:22:55 +0000 UTC" firstStartedPulling="2025-10-01 16:22:55.816868457 +0000 UTC m=+2475.122474648" lastFinishedPulling="2025-10-01 16:22:56.293600209 +0000 UTC m=+2475.599206410" observedRunningTime="2025-10-01 16:22:57.04495714 +0000 UTC m=+2476.350563361" watchObservedRunningTime="2025-10-01 16:22:57.049452724 +0000 UTC 
m=+2476.355058915" Oct 01 16:23:01 crc kubenswrapper[4949]: E1001 16:23:01.495364 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice/crio-2a1ba9eff47659989c387542b8c05663d330e042377326c10e70d1a5442ad8d9\": RecentStats: unable to find data in memory cache]" Oct 01 16:23:03 crc kubenswrapper[4949]: I1001 16:23:03.601863 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:23:03 crc kubenswrapper[4949]: E1001 16:23:03.602782 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:23:11 crc kubenswrapper[4949]: E1001 16:23:11.743600 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice/crio-2a1ba9eff47659989c387542b8c05663d330e042377326c10e70d1a5442ad8d9\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice\": RecentStats: unable to find data in memory cache]" Oct 01 16:23:17 crc kubenswrapper[4949]: I1001 16:23:17.602216 4949 scope.go:117] "RemoveContainer" 
containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:23:17 crc kubenswrapper[4949]: E1001 16:23:17.603152 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:23:21 crc kubenswrapper[4949]: E1001 16:23:21.982689 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice/crio-2a1ba9eff47659989c387542b8c05663d330e042377326c10e70d1a5442ad8d9\": RecentStats: unable to find data in memory cache]" Oct 01 16:23:30 crc kubenswrapper[4949]: I1001 16:23:30.331299 4949 generic.go:334] "Generic (PLEG): container finished" podID="eff7bc51-9128-4d18-8e2e-05c8b779e7ba" containerID="d6245546fd79d7c25f8e01686fbb0732390c71e03f18e9ec5c30ea85741007c4" exitCode=0 Oct 01 16:23:30 crc kubenswrapper[4949]: I1001 16:23:30.331386 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" event={"ID":"eff7bc51-9128-4d18-8e2e-05c8b779e7ba","Type":"ContainerDied","Data":"d6245546fd79d7c25f8e01686fbb0732390c71e03f18e9ec5c30ea85741007c4"} Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.609166 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.748920 4949 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.769465 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-libvirt-combined-ca-bundle\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.769764 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-ovn-default-certs-0\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.769907 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-bootstrap-combined-ca-bundle\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.770226 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-inventory\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.770396 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ceph\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc 
kubenswrapper[4949]: I1001 16:23:31.770614 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-libvirt-default-certs-0\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.776858 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-bootstrap-combined-ca-bundle" (OuterVolumeSpecName: "bootstrap-combined-ca-bundle") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "bootstrap-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.777171 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "libvirt-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.777340 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ovn-combined-ca-bundle\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.777477 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-nova-combined-ca-bundle\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.777595 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-neutron-metadata-combined-ca-bundle\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.777339 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ceph" (OuterVolumeSpecName: "ceph") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.777601 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-libvirt-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-libvirt-default-certs-0") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). 
InnerVolumeSpecName "openstack-edpm-ipam-libvirt-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.777917 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-neutron-metadata-default-certs-0\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.778050 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ssh-key\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.778199 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-repo-setup-combined-ca-bundle\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.778349 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d9mc4\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-kube-api-access-d9mc4\") pod \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\" (UID: \"eff7bc51-9128-4d18-8e2e-05c8b779e7ba\") " Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.779933 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.780144 4949 reconciler_common.go:293] "Volume detached for 
volume \"openstack-edpm-ipam-libvirt-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-libvirt-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.780276 4949 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.780600 4949 reconciler_common.go:293] "Volume detached for volume \"bootstrap-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-bootstrap-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.784596 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.796826 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.805930 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-nova-combined-ca-bundle" (OuterVolumeSpecName: "nova-combined-ca-bundle") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "nova-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.809900 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-ovn-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-ovn-default-certs-0") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "openstack-edpm-ipam-ovn-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.810727 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-repo-setup-combined-ca-bundle" (OuterVolumeSpecName: "repo-setup-combined-ca-bundle") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "repo-setup-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.815422 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-neutron-metadata-default-certs-0" (OuterVolumeSpecName: "openstack-edpm-ipam-neutron-metadata-default-certs-0") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). 
InnerVolumeSpecName "openstack-edpm-ipam-neutron-metadata-default-certs-0". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.816038 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-kube-api-access-d9mc4" (OuterVolumeSpecName: "kube-api-access-d9mc4") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "kube-api-access-d9mc4". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.845621 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.846299 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-inventory" (OuterVolumeSpecName: "inventory") pod "eff7bc51-9128-4d18-8e2e-05c8b779e7ba" (UID: "eff7bc51-9128-4d18-8e2e-05c8b779e7ba"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.883262 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-ovn-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-ovn-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.883491 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.883503 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.883515 4949 reconciler_common.go:293] "Volume detached for volume \"nova-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-nova-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.883525 4949 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.883535 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam-neutron-metadata-default-certs-0\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-openstack-edpm-ipam-neutron-metadata-default-certs-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.883544 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: 
\"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.883554 4949 reconciler_common.go:293] "Volume detached for volume \"repo-setup-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-repo-setup-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:31 crc kubenswrapper[4949]: I1001 16:23:31.883574 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d9mc4\" (UniqueName: \"kubernetes.io/projected/eff7bc51-9128-4d18-8e2e-05c8b779e7ba-kube-api-access-d9mc4\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:32 crc kubenswrapper[4949]: E1001 16:23:32.229669 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podcd7ebc74_d7fc_479d_9707_077690577317.slice/crio-2a1ba9eff47659989c387542b8c05663d330e042377326c10e70d1a5442ad8d9\": RecentStats: unable to find data in memory cache]" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.351390 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"1b065e9610be91fb7869bd4af48c01b9d60eab8531d64587fac8bb8990d9bc6e"} Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.354375 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.354382 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr" event={"ID":"eff7bc51-9128-4d18-8e2e-05c8b779e7ba","Type":"ContainerDied","Data":"d358cde2151225a41fad1b7dae3880749e82293e7146bd36f66466f3fb290023"} Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.354998 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d358cde2151225a41fad1b7dae3880749e82293e7146bd36f66466f3fb290023" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.462416 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq"] Oct 01 16:23:32 crc kubenswrapper[4949]: E1001 16:23:32.463162 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eff7bc51-9128-4d18-8e2e-05c8b779e7ba" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.463266 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="eff7bc51-9128-4d18-8e2e-05c8b779e7ba" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.463579 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="eff7bc51-9128-4d18-8e2e-05c8b779e7ba" containerName="install-certs-edpm-deployment-openstack-edpm-ipam" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.464389 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.467794 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.467972 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.468182 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.468231 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.468731 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.489208 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq"] Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.492944 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9rxw\" (UniqueName: \"kubernetes.io/projected/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-kube-api-access-m9rxw\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.493021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: 
\"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.493103 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.493176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.594542 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m9rxw\" (UniqueName: \"kubernetes.io/projected/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-kube-api-access-m9rxw\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.595315 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.595489 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.595596 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.600355 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ssh-key\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.602603 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-inventory\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.603930 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ceph\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 
16:23:32.622930 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9rxw\" (UniqueName: \"kubernetes.io/projected/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-kube-api-access-m9rxw\") pod \"ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:32 crc kubenswrapper[4949]: I1001 16:23:32.788321 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:33 crc kubenswrapper[4949]: I1001 16:23:33.348101 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq"] Oct 01 16:23:33 crc kubenswrapper[4949]: I1001 16:23:33.364398 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" event={"ID":"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9","Type":"ContainerStarted","Data":"55f3b9a4abea725e492783d1e8b83dd7f940cec521167acf6c8864732c86d14a"} Oct 01 16:23:34 crc kubenswrapper[4949]: I1001 16:23:34.374007 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" event={"ID":"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9","Type":"ContainerStarted","Data":"9deb93f5111a74bb3b5af97513ea90bea5d473ca76c0890793fdee8e55aa287d"} Oct 01 16:23:34 crc kubenswrapper[4949]: I1001 16:23:34.391245 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" podStartSLOduration=1.731460839 podStartE2EDuration="2.39122735s" podCreationTimestamp="2025-10-01 16:23:32 +0000 UTC" firstStartedPulling="2025-10-01 16:23:33.356877898 +0000 UTC m=+2512.662484109" lastFinishedPulling="2025-10-01 16:23:34.016644429 +0000 UTC m=+2513.322250620" observedRunningTime="2025-10-01 
16:23:34.387790396 +0000 UTC m=+2513.693396587" watchObservedRunningTime="2025-10-01 16:23:34.39122735 +0000 UTC m=+2513.696833541" Oct 01 16:23:39 crc kubenswrapper[4949]: I1001 16:23:39.426598 4949 generic.go:334] "Generic (PLEG): container finished" podID="be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9" containerID="9deb93f5111a74bb3b5af97513ea90bea5d473ca76c0890793fdee8e55aa287d" exitCode=0 Oct 01 16:23:39 crc kubenswrapper[4949]: I1001 16:23:39.426707 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" event={"ID":"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9","Type":"ContainerDied","Data":"9deb93f5111a74bb3b5af97513ea90bea5d473ca76c0890793fdee8e55aa287d"} Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.837608 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.865803 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ssh-key\") pod \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.865893 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ceph\") pod \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.866033 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-inventory\") pod \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 
16:23:40.866085 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9rxw\" (UniqueName: \"kubernetes.io/projected/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-kube-api-access-m9rxw\") pod \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\" (UID: \"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9\") " Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.883711 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-kube-api-access-m9rxw" (OuterVolumeSpecName: "kube-api-access-m9rxw") pod "be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9" (UID: "be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9"). InnerVolumeSpecName "kube-api-access-m9rxw". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.888325 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ceph" (OuterVolumeSpecName: "ceph") pod "be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9" (UID: "be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.900757 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-inventory" (OuterVolumeSpecName: "inventory") pod "be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9" (UID: "be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.922616 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9" (UID: "be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.969269 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.969306 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.969315 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:40 crc kubenswrapper[4949]: I1001 16:23:40.969325 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m9rxw\" (UniqueName: \"kubernetes.io/projected/be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9-kube-api-access-m9rxw\") on node \"crc\" DevicePath \"\"" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.452822 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" event={"ID":"be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9","Type":"ContainerDied","Data":"55f3b9a4abea725e492783d1e8b83dd7f940cec521167acf6c8864732c86d14a"} Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.453254 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="55f3b9a4abea725e492783d1e8b83dd7f940cec521167acf6c8864732c86d14a" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.452914 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.529267 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq"] Oct 01 16:23:41 crc kubenswrapper[4949]: E1001 16:23:41.529644 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.529669 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.529879 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9" containerName="ceph-client-edpm-deployment-openstack-edpm-ipam" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.530466 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.533427 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.534166 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.534242 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.534452 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.534586 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.538401 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.542463 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq"] Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.591195 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.591471 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.591580 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.591675 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.591758 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkkzs\" (UniqueName: \"kubernetes.io/projected/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-kube-api-access-fkkzs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.591837 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " 
pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.693854 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.694193 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkkzs\" (UniqueName: \"kubernetes.io/projected/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-kube-api-access-fkkzs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.694233 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.694628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.694768 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.694812 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.696773 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.696802 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ovncontroller-config" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.696884 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.696990 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.698641 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovn-combined-ca-bundle\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.706805 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovncontroller-config-0\" (UniqueName: 
\"kubernetes.io/configmap/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovncontroller-config-0\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.709205 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ceph\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.709587 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-inventory\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.709994 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ssh-key\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.711235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkkzs\" (UniqueName: \"kubernetes.io/projected/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-kube-api-access-fkkzs\") pod \"ovn-edpm-deployment-openstack-edpm-ipam-zl9vq\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.855404 4949 reflector.go:368] Caches populated for *v1.Secret from 
object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:23:41 crc kubenswrapper[4949]: I1001 16:23:41.863512 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:23:42 crc kubenswrapper[4949]: I1001 16:23:42.399717 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq"] Oct 01 16:23:42 crc kubenswrapper[4949]: I1001 16:23:42.464271 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" event={"ID":"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e","Type":"ContainerStarted","Data":"45d54578e2950c6eef637650a505ca15b2f7353de7e2d04e394d8b34ef03524e"} Oct 01 16:23:43 crc kubenswrapper[4949]: I1001 16:23:43.084425 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:23:43 crc kubenswrapper[4949]: I1001 16:23:43.475616 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" event={"ID":"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e","Type":"ContainerStarted","Data":"e59883e8f74befaabfed885730c2ffbd4dc098f2bc5e85a5e0c83cf064b6cef7"} Oct 01 16:23:43 crc kubenswrapper[4949]: I1001 16:23:43.494879 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" podStartSLOduration=1.813391953 podStartE2EDuration="2.494859678s" podCreationTimestamp="2025-10-01 16:23:41 +0000 UTC" firstStartedPulling="2025-10-01 16:23:42.400511912 +0000 UTC m=+2521.706118113" lastFinishedPulling="2025-10-01 16:23:43.081979647 +0000 UTC m=+2522.387585838" observedRunningTime="2025-10-01 16:23:43.494147019 +0000 UTC m=+2522.799753230" watchObservedRunningTime="2025-10-01 16:23:43.494859678 +0000 UTC m=+2522.800465889" Oct 01 16:24:55 crc kubenswrapper[4949]: I1001 16:24:55.192564 4949 
generic.go:334] "Generic (PLEG): container finished" podID="ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" containerID="e59883e8f74befaabfed885730c2ffbd4dc098f2bc5e85a5e0c83cf064b6cef7" exitCode=0 Oct 01 16:24:55 crc kubenswrapper[4949]: I1001 16:24:55.192680 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" event={"ID":"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e","Type":"ContainerDied","Data":"e59883e8f74befaabfed885730c2ffbd4dc098f2bc5e85a5e0c83cf064b6cef7"} Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.621703 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.711854 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-inventory\") pod \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.711924 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ssh-key\") pod \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.711974 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovn-combined-ca-bundle\") pod \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.712976 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fkkzs\" (UniqueName: 
\"kubernetes.io/projected/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-kube-api-access-fkkzs\") pod \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.713027 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovncontroller-config-0\") pod \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.713060 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ceph\") pod \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\" (UID: \"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e\") " Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.717877 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ceph" (OuterVolumeSpecName: "ceph") pod "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" (UID: "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.718215 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovn-combined-ca-bundle" (OuterVolumeSpecName: "ovn-combined-ca-bundle") pod "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" (UID: "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e"). InnerVolumeSpecName "ovn-combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.719743 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-kube-api-access-fkkzs" (OuterVolumeSpecName: "kube-api-access-fkkzs") pod "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" (UID: "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e"). InnerVolumeSpecName "kube-api-access-fkkzs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.735691 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovncontroller-config-0" (OuterVolumeSpecName: "ovncontroller-config-0") pod "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" (UID: "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e"). InnerVolumeSpecName "ovncontroller-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.746503 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-inventory" (OuterVolumeSpecName: "inventory") pod "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" (UID: "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.747722 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" (UID: "ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e"). InnerVolumeSpecName "ssh-key". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.815794 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.815841 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.815860 4949 reconciler_common.go:293] "Volume detached for volume \"ovn-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovn-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.815881 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fkkzs\" (UniqueName: \"kubernetes.io/projected/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-kube-api-access-fkkzs\") on node \"crc\" DevicePath \"\"" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.815899 4949 reconciler_common.go:293] "Volume detached for volume \"ovncontroller-config-0\" (UniqueName: \"kubernetes.io/configmap/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ovncontroller-config-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:24:56 crc kubenswrapper[4949]: I1001 16:24:56.815916 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.211435 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" event={"ID":"ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e","Type":"ContainerDied","Data":"45d54578e2950c6eef637650a505ca15b2f7353de7e2d04e394d8b34ef03524e"} Oct 01 16:24:57 crc 
kubenswrapper[4949]: I1001 16:24:57.211482 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45d54578e2950c6eef637650a505ca15b2f7353de7e2d04e394d8b34ef03524e" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.211506 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ovn-edpm-deployment-openstack-edpm-ipam-zl9vq" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.303851 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz"] Oct 01 16:24:57 crc kubenswrapper[4949]: E1001 16:24:57.304639 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.304661 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.304891 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e" containerName="ovn-edpm-deployment-openstack-edpm-ipam" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.305661 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.309097 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.309118 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.309311 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.309325 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.309389 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.309390 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"neutron-ovn-metadata-agent-neutron-config" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.309673 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-metadata-neutron-config" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.314943 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz"] Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.455561 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-scrrs\" (UniqueName: \"kubernetes.io/projected/2b9b0f35-85c4-4286-9283-e3af60933d81-kube-api-access-scrrs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.455662 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.455723 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.455767 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.455892 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " 
pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.455951 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.456052 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.557670 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.557745 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-scrrs\" (UniqueName: \"kubernetes.io/projected/2b9b0f35-85c4-4286-9283-e3af60933d81-kube-api-access-scrrs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.557790 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.557820 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.557842 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.557876 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.557902 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.561613 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-inventory\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.562482 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-ovn-metadata-agent-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.562575 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-nova-metadata-neutron-config-0\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.563950 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ssh-key\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: 
\"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.564065 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ceph\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.566793 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-metadata-combined-ca-bundle\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.573393 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-scrrs\" (UniqueName: \"kubernetes.io/projected/2b9b0f35-85c4-4286-9283-e3af60933d81-kube-api-access-scrrs\") pod \"neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:57 crc kubenswrapper[4949]: I1001 16:24:57.634321 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:24:58 crc kubenswrapper[4949]: I1001 16:24:58.199682 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz"] Oct 01 16:24:58 crc kubenswrapper[4949]: W1001 16:24:58.202534 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2b9b0f35_85c4_4286_9283_e3af60933d81.slice/crio-06b75f3f4c49d1ff2b46f3583144fb5e49cc36081d58f44e4d0c2bd2b67c813a WatchSource:0}: Error finding container 06b75f3f4c49d1ff2b46f3583144fb5e49cc36081d58f44e4d0c2bd2b67c813a: Status 404 returned error can't find the container with id 06b75f3f4c49d1ff2b46f3583144fb5e49cc36081d58f44e4d0c2bd2b67c813a Oct 01 16:24:58 crc kubenswrapper[4949]: I1001 16:24:58.221615 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" event={"ID":"2b9b0f35-85c4-4286-9283-e3af60933d81","Type":"ContainerStarted","Data":"06b75f3f4c49d1ff2b46f3583144fb5e49cc36081d58f44e4d0c2bd2b67c813a"} Oct 01 16:24:59 crc kubenswrapper[4949]: I1001 16:24:59.230968 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" event={"ID":"2b9b0f35-85c4-4286-9283-e3af60933d81","Type":"ContainerStarted","Data":"605b110b81c409892e1e3dcda837ebd2618fd627dc255c4ad67008afca6dbbe5"} Oct 01 16:24:59 crc kubenswrapper[4949]: I1001 16:24:59.251027 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" podStartSLOduration=1.582912766 podStartE2EDuration="2.250999424s" podCreationTimestamp="2025-10-01 16:24:57 +0000 UTC" firstStartedPulling="2025-10-01 16:24:58.204273533 +0000 UTC m=+2597.509879724" lastFinishedPulling="2025-10-01 16:24:58.872360191 +0000 UTC 
m=+2598.177966382" observedRunningTime="2025-10-01 16:24:59.246441448 +0000 UTC m=+2598.552047649" watchObservedRunningTime="2025-10-01 16:24:59.250999424 +0000 UTC m=+2598.556605655" Oct 01 16:25:48 crc kubenswrapper[4949]: I1001 16:25:48.038329 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:25:48 crc kubenswrapper[4949]: I1001 16:25:48.039071 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:26:11 crc kubenswrapper[4949]: I1001 16:26:11.977203 4949 generic.go:334] "Generic (PLEG): container finished" podID="2b9b0f35-85c4-4286-9283-e3af60933d81" containerID="605b110b81c409892e1e3dcda837ebd2618fd627dc255c4ad67008afca6dbbe5" exitCode=0 Oct 01 16:26:11 crc kubenswrapper[4949]: I1001 16:26:11.977351 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" event={"ID":"2b9b0f35-85c4-4286-9283-e3af60933d81","Type":"ContainerDied","Data":"605b110b81c409892e1e3dcda837ebd2618fd627dc255c4ad67008afca6dbbe5"} Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.385579 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.493589 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-ovn-metadata-agent-neutron-config-0\") pod \"2b9b0f35-85c4-4286-9283-e3af60933d81\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.493988 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-scrrs\" (UniqueName: \"kubernetes.io/projected/2b9b0f35-85c4-4286-9283-e3af60933d81-kube-api-access-scrrs\") pod \"2b9b0f35-85c4-4286-9283-e3af60933d81\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.494193 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ceph\") pod \"2b9b0f35-85c4-4286-9283-e3af60933d81\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.494293 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ssh-key\") pod \"2b9b0f35-85c4-4286-9283-e3af60933d81\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.494923 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-nova-metadata-neutron-config-0\") pod \"2b9b0f35-85c4-4286-9283-e3af60933d81\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.495042 4949 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-metadata-combined-ca-bundle\") pod \"2b9b0f35-85c4-4286-9283-e3af60933d81\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.495852 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-inventory\") pod \"2b9b0f35-85c4-4286-9283-e3af60933d81\" (UID: \"2b9b0f35-85c4-4286-9283-e3af60933d81\") " Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.500499 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-metadata-combined-ca-bundle" (OuterVolumeSpecName: "neutron-metadata-combined-ca-bundle") pod "2b9b0f35-85c4-4286-9283-e3af60933d81" (UID: "2b9b0f35-85c4-4286-9283-e3af60933d81"). InnerVolumeSpecName "neutron-metadata-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.500576 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b9b0f35-85c4-4286-9283-e3af60933d81-kube-api-access-scrrs" (OuterVolumeSpecName: "kube-api-access-scrrs") pod "2b9b0f35-85c4-4286-9283-e3af60933d81" (UID: "2b9b0f35-85c4-4286-9283-e3af60933d81"). InnerVolumeSpecName "kube-api-access-scrrs". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.506480 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ceph" (OuterVolumeSpecName: "ceph") pod "2b9b0f35-85c4-4286-9283-e3af60933d81" (UID: "2b9b0f35-85c4-4286-9283-e3af60933d81"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.520515 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-inventory" (OuterVolumeSpecName: "inventory") pod "2b9b0f35-85c4-4286-9283-e3af60933d81" (UID: "2b9b0f35-85c4-4286-9283-e3af60933d81"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.527357 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "2b9b0f35-85c4-4286-9283-e3af60933d81" (UID: "2b9b0f35-85c4-4286-9283-e3af60933d81"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.540553 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-ovn-metadata-agent-neutron-config-0" (OuterVolumeSpecName: "neutron-ovn-metadata-agent-neutron-config-0") pod "2b9b0f35-85c4-4286-9283-e3af60933d81" (UID: "2b9b0f35-85c4-4286-9283-e3af60933d81"). InnerVolumeSpecName "neutron-ovn-metadata-agent-neutron-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.550372 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-nova-metadata-neutron-config-0" (OuterVolumeSpecName: "nova-metadata-neutron-config-0") pod "2b9b0f35-85c4-4286-9283-e3af60933d81" (UID: "2b9b0f35-85c4-4286-9283-e3af60933d81"). InnerVolumeSpecName "nova-metadata-neutron-config-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.599033 4949 reconciler_common.go:293] "Volume detached for volume \"nova-metadata-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-nova-metadata-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.599094 4949 reconciler_common.go:293] "Volume detached for volume \"neutron-metadata-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-metadata-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.599117 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.599157 4949 reconciler_common.go:293] "Volume detached for volume \"neutron-ovn-metadata-agent-neutron-config-0\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-neutron-ovn-metadata-agent-neutron-config-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.599178 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-scrrs\" (UniqueName: \"kubernetes.io/projected/2b9b0f35-85c4-4286-9283-e3af60933d81-kube-api-access-scrrs\") on node \"crc\" DevicePath \"\"" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.599200 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:26:13 crc kubenswrapper[4949]: I1001 16:26:13.599217 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/2b9b0f35-85c4-4286-9283-e3af60933d81-ssh-key\") on node \"crc\" 
DevicePath \"\"" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.003304 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" event={"ID":"2b9b0f35-85c4-4286-9283-e3af60933d81","Type":"ContainerDied","Data":"06b75f3f4c49d1ff2b46f3583144fb5e49cc36081d58f44e4d0c2bd2b67c813a"} Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.003778 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06b75f3f4c49d1ff2b46f3583144fb5e49cc36081d58f44e4d0c2bd2b67c813a" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.003405 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.123917 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp"] Oct 01 16:26:14 crc kubenswrapper[4949]: E1001 16:26:14.124357 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2b9b0f35-85c4-4286-9283-e3af60933d81" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.124378 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2b9b0f35-85c4-4286-9283-e3af60933d81" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.124653 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2b9b0f35-85c4-4286-9283-e3af60933d81" containerName="neutron-metadata-edpm-deployment-openstack-edpm-ipam" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.125562 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.132600 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.132717 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.132853 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"libvirt-secret" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.133025 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.133117 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.133217 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.175645 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp"] Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.210939 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.210999 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.211088 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.211190 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.211245 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.211346 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cbbhh\" (UniqueName: \"kubernetes.io/projected/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-kube-api-access-cbbhh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " 
pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.312682 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.312777 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.312869 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.312930 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.313011 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cbbhh\" (UniqueName: 
\"kubernetes.io/projected/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-kube-api-access-cbbhh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.313047 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.319352 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ssh-key\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.323702 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-inventory\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.328699 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ceph\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.329101 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-secret-0\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.329276 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-combined-ca-bundle\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.331646 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cbbhh\" (UniqueName: \"kubernetes.io/projected/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-kube-api-access-cbbhh\") pod \"libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:14 crc kubenswrapper[4949]: I1001 16:26:14.457797 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:26:15 crc kubenswrapper[4949]: I1001 16:26:14.999868 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp"] Oct 01 16:26:16 crc kubenswrapper[4949]: I1001 16:26:16.024347 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" event={"ID":"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4","Type":"ContainerStarted","Data":"7142e9f48879dd6047c57f463a0258d8618abe4f04ae189675496911fe2dd21d"} Oct 01 16:26:16 crc kubenswrapper[4949]: I1001 16:26:16.024713 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" event={"ID":"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4","Type":"ContainerStarted","Data":"d61c5d4f337b14d9d962d84f27caac46d67e93ebfbfc3c1adbb7dbe0ad1e1fa8"} Oct 01 16:26:16 crc kubenswrapper[4949]: I1001 16:26:16.045737 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" podStartSLOduration=1.344925741 podStartE2EDuration="2.045690065s" podCreationTimestamp="2025-10-01 16:26:14 +0000 UTC" firstStartedPulling="2025-10-01 16:26:15.007701104 +0000 UTC m=+2674.313307305" lastFinishedPulling="2025-10-01 16:26:15.708465428 +0000 UTC m=+2675.014071629" observedRunningTime="2025-10-01 16:26:16.039847305 +0000 UTC m=+2675.345453506" watchObservedRunningTime="2025-10-01 16:26:16.045690065 +0000 UTC m=+2675.351296256" Oct 01 16:26:18 crc kubenswrapper[4949]: I1001 16:26:18.041589 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:26:18 crc kubenswrapper[4949]: I1001 
16:26:18.042075 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:26:47 crc kubenswrapper[4949]: I1001 16:26:47.499486 4949 scope.go:117] "RemoveContainer" containerID="a480d81ef37de1008b4719578786822a626b9e4ab82fa386802ac28dcd1eea95" Oct 01 16:26:48 crc kubenswrapper[4949]: I1001 16:26:48.038667 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:26:48 crc kubenswrapper[4949]: I1001 16:26:48.039050 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:26:48 crc kubenswrapper[4949]: I1001 16:26:48.039111 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 16:26:48 crc kubenswrapper[4949]: I1001 16:26:48.040095 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1b065e9610be91fb7869bd4af48c01b9d60eab8531d64587fac8bb8990d9bc6e"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 16:26:48 crc kubenswrapper[4949]: I1001 16:26:48.040234 4949 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://1b065e9610be91fb7869bd4af48c01b9d60eab8531d64587fac8bb8990d9bc6e" gracePeriod=600 Oct 01 16:26:48 crc kubenswrapper[4949]: I1001 16:26:48.369154 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="1b065e9610be91fb7869bd4af48c01b9d60eab8531d64587fac8bb8990d9bc6e" exitCode=0 Oct 01 16:26:48 crc kubenswrapper[4949]: I1001 16:26:48.369202 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"1b065e9610be91fb7869bd4af48c01b9d60eab8531d64587fac8bb8990d9bc6e"} Oct 01 16:26:48 crc kubenswrapper[4949]: I1001 16:26:48.369236 4949 scope.go:117] "RemoveContainer" containerID="757b624790258f8fb52b1acdc82513f0413ed869329247fea2c8ea990d6338de" Oct 01 16:26:49 crc kubenswrapper[4949]: I1001 16:26:49.382918 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c"} Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.495595 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-wvznw"] Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.499799 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.515326 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wvznw"] Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.589116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n5t4p\" (UniqueName: \"kubernetes.io/projected/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-kube-api-access-n5t4p\") pod \"certified-operators-wvznw\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.589503 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-utilities\") pod \"certified-operators-wvznw\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.589528 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-catalog-content\") pod \"certified-operators-wvznw\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.691175 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-utilities\") pod \"certified-operators-wvznw\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.691488 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-catalog-content\") pod \"certified-operators-wvznw\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.691610 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n5t4p\" (UniqueName: \"kubernetes.io/projected/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-kube-api-access-n5t4p\") pod \"certified-operators-wvznw\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.692239 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-utilities\") pod \"certified-operators-wvznw\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.692274 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-catalog-content\") pod \"certified-operators-wvznw\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.717893 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n5t4p\" (UniqueName: \"kubernetes.io/projected/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-kube-api-access-n5t4p\") pod \"certified-operators-wvznw\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:49 crc kubenswrapper[4949]: I1001 16:27:49.824869 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:50 crc kubenswrapper[4949]: I1001 16:27:50.328689 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-wvznw"] Oct 01 16:27:51 crc kubenswrapper[4949]: I1001 16:27:51.037095 4949 generic.go:334] "Generic (PLEG): container finished" podID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerID="8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f" exitCode=0 Oct 01 16:27:51 crc kubenswrapper[4949]: I1001 16:27:51.037238 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvznw" event={"ID":"1cb3bffb-23f4-4e6a-b2c0-335fc765da09","Type":"ContainerDied","Data":"8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f"} Oct 01 16:27:51 crc kubenswrapper[4949]: I1001 16:27:51.037551 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvznw" event={"ID":"1cb3bffb-23f4-4e6a-b2c0-335fc765da09","Type":"ContainerStarted","Data":"39602c81b3da98bdf35ab0e81d56b79a1d9f4bb230bd80e68c23f58436abdd38"} Oct 01 16:27:51 crc kubenswrapper[4949]: I1001 16:27:51.039572 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 16:27:52 crc kubenswrapper[4949]: I1001 16:27:52.048288 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvznw" event={"ID":"1cb3bffb-23f4-4e6a-b2c0-335fc765da09","Type":"ContainerStarted","Data":"4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067"} Oct 01 16:27:53 crc kubenswrapper[4949]: I1001 16:27:53.060938 4949 generic.go:334] "Generic (PLEG): container finished" podID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerID="4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067" exitCode=0 Oct 01 16:27:53 crc kubenswrapper[4949]: I1001 16:27:53.061019 4949 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-wvznw" event={"ID":"1cb3bffb-23f4-4e6a-b2c0-335fc765da09","Type":"ContainerDied","Data":"4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067"} Oct 01 16:27:55 crc kubenswrapper[4949]: I1001 16:27:55.088941 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvznw" event={"ID":"1cb3bffb-23f4-4e6a-b2c0-335fc765da09","Type":"ContainerStarted","Data":"d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325"} Oct 01 16:27:55 crc kubenswrapper[4949]: I1001 16:27:55.120985 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-wvznw" podStartSLOduration=2.972319766 podStartE2EDuration="6.120958671s" podCreationTimestamp="2025-10-01 16:27:49 +0000 UTC" firstStartedPulling="2025-10-01 16:27:51.039317074 +0000 UTC m=+2770.344923275" lastFinishedPulling="2025-10-01 16:27:54.187955979 +0000 UTC m=+2773.493562180" observedRunningTime="2025-10-01 16:27:55.105986171 +0000 UTC m=+2774.411592362" watchObservedRunningTime="2025-10-01 16:27:55.120958671 +0000 UTC m=+2774.426564882" Oct 01 16:27:59 crc kubenswrapper[4949]: I1001 16:27:59.825919 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:59 crc kubenswrapper[4949]: I1001 16:27:59.826610 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:27:59 crc kubenswrapper[4949]: I1001 16:27:59.876146 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:28:00 crc kubenswrapper[4949]: I1001 16:28:00.210313 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:28:00 crc kubenswrapper[4949]: I1001 
16:28:00.268427 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wvznw"] Oct 01 16:28:02 crc kubenswrapper[4949]: I1001 16:28:02.171744 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-wvznw" podUID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerName="registry-server" containerID="cri-o://d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325" gracePeriod=2 Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.145095 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.183895 4949 generic.go:334] "Generic (PLEG): container finished" podID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerID="d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325" exitCode=0 Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.183941 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvznw" event={"ID":"1cb3bffb-23f4-4e6a-b2c0-335fc765da09","Type":"ContainerDied","Data":"d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325"} Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.183977 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-wvznw" event={"ID":"1cb3bffb-23f4-4e6a-b2c0-335fc765da09","Type":"ContainerDied","Data":"39602c81b3da98bdf35ab0e81d56b79a1d9f4bb230bd80e68c23f58436abdd38"} Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.184000 4949 scope.go:117] "RemoveContainer" containerID="d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325" Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.184167 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-wvznw" Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.207512 4949 scope.go:117] "RemoveContainer" containerID="4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067" Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.227523 4949 scope.go:117] "RemoveContainer" containerID="8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f" Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.267570 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-catalog-content\") pod \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.267630 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-utilities\") pod \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.267746 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n5t4p\" (UniqueName: \"kubernetes.io/projected/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-kube-api-access-n5t4p\") pod \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\" (UID: \"1cb3bffb-23f4-4e6a-b2c0-335fc765da09\") " Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.269879 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-utilities" (OuterVolumeSpecName: "utilities") pod "1cb3bffb-23f4-4e6a-b2c0-335fc765da09" (UID: "1cb3bffb-23f4-4e6a-b2c0-335fc765da09"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.273604 4949 scope.go:117] "RemoveContainer" containerID="d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325"
Oct 01 16:28:03 crc kubenswrapper[4949]: E1001 16:28:03.274160 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325\": container with ID starting with d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325 not found: ID does not exist" containerID="d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325"
Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.274208 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325"} err="failed to get container status \"d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325\": rpc error: code = NotFound desc = could not find container \"d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325\": container with ID starting with d63b4be6061955baf96e532578b93f55608dc30dbff18898718a7570c07a3325 not found: ID does not exist"
Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.274238 4949 scope.go:117] "RemoveContainer" containerID="4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067"
Oct 01 16:28:03 crc kubenswrapper[4949]: E1001 16:28:03.274567 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067\": container with ID starting with 4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067 not found: ID does not exist" containerID="4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067"
Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.274621 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067"} err="failed to get container status \"4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067\": rpc error: code = NotFound desc = could not find container \"4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067\": container with ID starting with 4ffb14909be4ce5031c22c0b60212f865b01023d3007f608a7d4b423b776b067 not found: ID does not exist"
Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.274649 4949 scope.go:117] "RemoveContainer" containerID="8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f"
Oct 01 16:28:03 crc kubenswrapper[4949]: E1001 16:28:03.275060 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f\": container with ID starting with 8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f not found: ID does not exist" containerID="8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f"
Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.275097 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f"} err="failed to get container status \"8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f\": rpc error: code = NotFound desc = could not find container \"8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f\": container with ID starting with 8013f797e475c114a4c1d5ea4b872c7f65e2da335e3159028858a510aed5e92f not found: ID does not exist"
Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.276652 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-kube-api-access-n5t4p" (OuterVolumeSpecName: "kube-api-access-n5t4p") pod "1cb3bffb-23f4-4e6a-b2c0-335fc765da09" (UID: "1cb3bffb-23f4-4e6a-b2c0-335fc765da09"). InnerVolumeSpecName "kube-api-access-n5t4p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.370715 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-utilities\") on node \"crc\" DevicePath \"\""
Oct 01 16:28:03 crc kubenswrapper[4949]: I1001 16:28:03.370755 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n5t4p\" (UniqueName: \"kubernetes.io/projected/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-kube-api-access-n5t4p\") on node \"crc\" DevicePath \"\""
Oct 01 16:28:04 crc kubenswrapper[4949]: I1001 16:28:04.225895 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1cb3bffb-23f4-4e6a-b2c0-335fc765da09" (UID: "1cb3bffb-23f4-4e6a-b2c0-335fc765da09"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:28:04 crc kubenswrapper[4949]: I1001 16:28:04.289080 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1cb3bffb-23f4-4e6a-b2c0-335fc765da09-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 01 16:28:04 crc kubenswrapper[4949]: I1001 16:28:04.421255 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-wvznw"]
Oct 01 16:28:04 crc kubenswrapper[4949]: I1001 16:28:04.430429 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-wvznw"]
Oct 01 16:28:05 crc kubenswrapper[4949]: I1001 16:28:05.612211 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" path="/var/lib/kubelet/pods/1cb3bffb-23f4-4e6a-b2c0-335fc765da09/volumes"
Oct 01 16:28:48 crc kubenswrapper[4949]: I1001 16:28:48.039064 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 01 16:28:48 crc kubenswrapper[4949]: I1001 16:28:48.039568 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 01 16:29:18 crc kubenswrapper[4949]: I1001 16:29:18.038673 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 01 16:29:18 crc kubenswrapper[4949]: I1001 16:29:18.039309 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 01 16:29:48 crc kubenswrapper[4949]: I1001 16:29:48.038309 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 01 16:29:48 crc kubenswrapper[4949]: I1001 16:29:48.038944 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 01 16:29:48 crc kubenswrapper[4949]: I1001 16:29:48.039023 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287"
Oct 01 16:29:48 crc kubenswrapper[4949]: I1001 16:29:48.040645 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 01 16:29:48 crc kubenswrapper[4949]: I1001 16:29:48.040750 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" gracePeriod=600
Oct 01 16:29:48 crc kubenswrapper[4949]: E1001 16:29:48.166229 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558"
Oct 01 16:29:48 crc kubenswrapper[4949]: I1001 16:29:48.256266 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" exitCode=0
Oct 01 16:29:48 crc kubenswrapper[4949]: I1001 16:29:48.256311 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c"}
Oct 01 16:29:48 crc kubenswrapper[4949]: I1001 16:29:48.256342 4949 scope.go:117] "RemoveContainer" containerID="1b065e9610be91fb7869bd4af48c01b9d60eab8531d64587fac8bb8990d9bc6e"
Oct 01 16:29:48 crc kubenswrapper[4949]: I1001 16:29:48.257087 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c"
Oct 01 16:29:48 crc kubenswrapper[4949]: E1001 16:29:48.257556 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.235235 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-9mbp4"]
Oct 01 16:29:49 crc kubenswrapper[4949]: E1001 16:29:49.235670 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerName="extract-utilities"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.235684 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerName="extract-utilities"
Oct 01 16:29:49 crc kubenswrapper[4949]: E1001 16:29:49.235708 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerName="extract-content"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.235716 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerName="extract-content"
Oct 01 16:29:49 crc kubenswrapper[4949]: E1001 16:29:49.235737 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerName="registry-server"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.235744 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerName="registry-server"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.235967 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="1cb3bffb-23f4-4e6a-b2c0-335fc765da09" containerName="registry-server"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.237546 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.251230 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mbp4"]
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.288097 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-utilities\") pod \"community-operators-9mbp4\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.288275 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kbxs7\" (UniqueName: \"kubernetes.io/projected/d04090b3-c163-4510-a46f-6e0510cfab4f-kube-api-access-kbxs7\") pod \"community-operators-9mbp4\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.288311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-catalog-content\") pod \"community-operators-9mbp4\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.390285 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-utilities\") pod \"community-operators-9mbp4\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.390608 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-catalog-content\") pod \"community-operators-9mbp4\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.390697 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kbxs7\" (UniqueName: \"kubernetes.io/projected/d04090b3-c163-4510-a46f-6e0510cfab4f-kube-api-access-kbxs7\") pod \"community-operators-9mbp4\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.391088 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-utilities\") pod \"community-operators-9mbp4\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.391177 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-catalog-content\") pod \"community-operators-9mbp4\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.410426 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kbxs7\" (UniqueName: \"kubernetes.io/projected/d04090b3-c163-4510-a46f-6e0510cfab4f-kube-api-access-kbxs7\") pod \"community-operators-9mbp4\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:49 crc kubenswrapper[4949]: I1001 16:29:49.555995 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:50 crc kubenswrapper[4949]: I1001 16:29:50.128796 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-9mbp4"]
Oct 01 16:29:50 crc kubenswrapper[4949]: W1001 16:29:50.134103 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd04090b3_c163_4510_a46f_6e0510cfab4f.slice/crio-72eb767821c70e804802c53aa5c376066cbc47023e2dc2fbce6c820d7058dfb7 WatchSource:0}: Error finding container 72eb767821c70e804802c53aa5c376066cbc47023e2dc2fbce6c820d7058dfb7: Status 404 returned error can't find the container with id 72eb767821c70e804802c53aa5c376066cbc47023e2dc2fbce6c820d7058dfb7
Oct 01 16:29:50 crc kubenswrapper[4949]: I1001 16:29:50.283780 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mbp4" event={"ID":"d04090b3-c163-4510-a46f-6e0510cfab4f","Type":"ContainerStarted","Data":"72eb767821c70e804802c53aa5c376066cbc47023e2dc2fbce6c820d7058dfb7"}
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.293752 4949 generic.go:334] "Generic (PLEG): container finished" podID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerID="d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca" exitCode=0
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.293801 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mbp4" event={"ID":"d04090b3-c163-4510-a46f-6e0510cfab4f","Type":"ContainerDied","Data":"d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca"}
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.645535 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-d2dn8"]
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.649518 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.657968 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2dn8"]
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.742485 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-catalog-content\") pod \"redhat-operators-d2dn8\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.742743 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lhg2g\" (UniqueName: \"kubernetes.io/projected/c86331e7-1eac-432b-b509-680a577fb2f5-kube-api-access-lhg2g\") pod \"redhat-operators-d2dn8\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.742910 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-utilities\") pod \"redhat-operators-d2dn8\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.844493 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-catalog-content\") pod \"redhat-operators-d2dn8\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.844813 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lhg2g\" (UniqueName: \"kubernetes.io/projected/c86331e7-1eac-432b-b509-680a577fb2f5-kube-api-access-lhg2g\") pod \"redhat-operators-d2dn8\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.844870 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-utilities\") pod \"redhat-operators-d2dn8\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.845209 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-catalog-content\") pod \"redhat-operators-d2dn8\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.845285 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-utilities\") pod \"redhat-operators-d2dn8\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.865762 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lhg2g\" (UniqueName: \"kubernetes.io/projected/c86331e7-1eac-432b-b509-680a577fb2f5-kube-api-access-lhg2g\") pod \"redhat-operators-d2dn8\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:51 crc kubenswrapper[4949]: I1001 16:29:51.981911 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:29:52 crc kubenswrapper[4949]: I1001 16:29:52.310162 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mbp4" event={"ID":"d04090b3-c163-4510-a46f-6e0510cfab4f","Type":"ContainerStarted","Data":"0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340"}
Oct 01 16:29:52 crc kubenswrapper[4949]: I1001 16:29:52.499148 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-d2dn8"]
Oct 01 16:29:53 crc kubenswrapper[4949]: I1001 16:29:53.319798 4949 generic.go:334] "Generic (PLEG): container finished" podID="c86331e7-1eac-432b-b509-680a577fb2f5" containerID="568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff" exitCode=0
Oct 01 16:29:53 crc kubenswrapper[4949]: I1001 16:29:53.320108 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2dn8" event={"ID":"c86331e7-1eac-432b-b509-680a577fb2f5","Type":"ContainerDied","Data":"568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff"}
Oct 01 16:29:53 crc kubenswrapper[4949]: I1001 16:29:53.320382 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2dn8" event={"ID":"c86331e7-1eac-432b-b509-680a577fb2f5","Type":"ContainerStarted","Data":"3ab1acb2bead3999b2966512dc5e5c7ef3090e5125e6f8d71deca25f669f9b48"}
Oct 01 16:29:53 crc kubenswrapper[4949]: I1001 16:29:53.323498 4949 generic.go:334] "Generic (PLEG): container finished" podID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerID="0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340" exitCode=0
Oct 01 16:29:53 crc kubenswrapper[4949]: I1001 16:29:53.323562 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mbp4" event={"ID":"d04090b3-c163-4510-a46f-6e0510cfab4f","Type":"ContainerDied","Data":"0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340"}
Oct 01 16:29:54 crc kubenswrapper[4949]: I1001 16:29:54.360091 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mbp4" event={"ID":"d04090b3-c163-4510-a46f-6e0510cfab4f","Type":"ContainerStarted","Data":"4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a"}
Oct 01 16:29:54 crc kubenswrapper[4949]: I1001 16:29:54.385271 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-9mbp4" podStartSLOduration=2.711923668 podStartE2EDuration="5.385255737s" podCreationTimestamp="2025-10-01 16:29:49 +0000 UTC" firstStartedPulling="2025-10-01 16:29:51.296136116 +0000 UTC m=+2890.601742307" lastFinishedPulling="2025-10-01 16:29:53.969468185 +0000 UTC m=+2893.275074376" observedRunningTime="2025-10-01 16:29:54.379483298 +0000 UTC m=+2893.685089499" watchObservedRunningTime="2025-10-01 16:29:54.385255737 +0000 UTC m=+2893.690861928"
Oct 01 16:29:55 crc kubenswrapper[4949]: I1001 16:29:55.371141 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2dn8" event={"ID":"c86331e7-1eac-432b-b509-680a577fb2f5","Type":"ContainerStarted","Data":"f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e"}
Oct 01 16:29:56 crc kubenswrapper[4949]: I1001 16:29:56.392895 4949 generic.go:334] "Generic (PLEG): container finished" podID="c86331e7-1eac-432b-b509-680a577fb2f5" containerID="f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e" exitCode=0
Oct 01 16:29:56 crc kubenswrapper[4949]: I1001 16:29:56.393059 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2dn8" event={"ID":"c86331e7-1eac-432b-b509-680a577fb2f5","Type":"ContainerDied","Data":"f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e"}
Oct 01 16:29:57 crc kubenswrapper[4949]: I1001 16:29:57.402262 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2dn8" event={"ID":"c86331e7-1eac-432b-b509-680a577fb2f5","Type":"ContainerStarted","Data":"edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b"}
Oct 01 16:29:57 crc kubenswrapper[4949]: I1001 16:29:57.432910 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-d2dn8" podStartSLOduration=2.78114569 podStartE2EDuration="6.432890121s" podCreationTimestamp="2025-10-01 16:29:51 +0000 UTC" firstStartedPulling="2025-10-01 16:29:53.322773359 +0000 UTC m=+2892.628379560" lastFinishedPulling="2025-10-01 16:29:56.9745178 +0000 UTC m=+2896.280123991" observedRunningTime="2025-10-01 16:29:57.427107561 +0000 UTC m=+2896.732713752" watchObservedRunningTime="2025-10-01 16:29:57.432890121 +0000 UTC m=+2896.738496312"
Oct 01 16:29:59 crc kubenswrapper[4949]: I1001 16:29:59.556599 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:59 crc kubenswrapper[4949]: I1001 16:29:59.556963 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:29:59 crc kubenswrapper[4949]: I1001 16:29:59.601860 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c"
Oct 01 16:29:59 crc kubenswrapper[4949]: E1001 16:29:59.602410 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558"
Oct 01 16:29:59 crc kubenswrapper[4949]: I1001 16:29:59.622076 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.186696 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"]
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.188071 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.190167 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.192386 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.196323 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"]
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.304948 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9525d427-6873-416a-bec5-e747a0ac944b-secret-volume\") pod \"collect-profiles-29322270-7grzs\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.305012 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqhtg\" (UniqueName: \"kubernetes.io/projected/9525d427-6873-416a-bec5-e747a0ac944b-kube-api-access-cqhtg\") pod \"collect-profiles-29322270-7grzs\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.305088 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9525d427-6873-416a-bec5-e747a0ac944b-config-volume\") pod \"collect-profiles-29322270-7grzs\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.406883 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9525d427-6873-416a-bec5-e747a0ac944b-secret-volume\") pod \"collect-profiles-29322270-7grzs\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.406940 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqhtg\" (UniqueName: \"kubernetes.io/projected/9525d427-6873-416a-bec5-e747a0ac944b-kube-api-access-cqhtg\") pod \"collect-profiles-29322270-7grzs\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.407004 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9525d427-6873-416a-bec5-e747a0ac944b-config-volume\") pod \"collect-profiles-29322270-7grzs\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.407996 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9525d427-6873-416a-bec5-e747a0ac944b-config-volume\") pod \"collect-profiles-29322270-7grzs\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.421786 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9525d427-6873-416a-bec5-e747a0ac944b-secret-volume\") pod \"collect-profiles-29322270-7grzs\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.434498 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqhtg\" (UniqueName: \"kubernetes.io/projected/9525d427-6873-416a-bec5-e747a0ac944b-kube-api-access-cqhtg\") pod \"collect-profiles-29322270-7grzs\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.481151 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-9mbp4"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.511458 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.823860 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mbp4"]
Oct 01 16:30:00 crc kubenswrapper[4949]: I1001 16:30:00.979311 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"]
Oct 01 16:30:00 crc kubenswrapper[4949]: W1001 16:30:00.995084 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9525d427_6873_416a_bec5_e747a0ac944b.slice/crio-6902b370e1ea48126b0d6682357cd225ef3cee640a9561b3462bf030b7e577e0 WatchSource:0}: Error finding container 6902b370e1ea48126b0d6682357cd225ef3cee640a9561b3462bf030b7e577e0: Status 404 returned error can't find the container with id 6902b370e1ea48126b0d6682357cd225ef3cee640a9561b3462bf030b7e577e0
Oct 01 16:30:01 crc kubenswrapper[4949]: I1001 16:30:01.445496 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs" event={"ID":"9525d427-6873-416a-bec5-e747a0ac944b","Type":"ContainerStarted","Data":"6902b370e1ea48126b0d6682357cd225ef3cee640a9561b3462bf030b7e577e0"}
Oct 01 16:30:01 crc kubenswrapper[4949]: I1001 16:30:01.982245 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:30:01 crc kubenswrapper[4949]: I1001 16:30:01.982329 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-d2dn8"
Oct 01 16:30:02 crc kubenswrapper[4949]: I1001 16:30:02.459505 4949 generic.go:334] "Generic (PLEG): container finished" podID="9525d427-6873-416a-bec5-e747a0ac944b" containerID="9f489ad3ab453cef7d9b794ca5b1a1eb87386140c9c9a7dc2e55d8b6fb47c5e1" exitCode=0
Oct 01 16:30:02 crc
kubenswrapper[4949]: I1001 16:30:02.459613 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs" event={"ID":"9525d427-6873-416a-bec5-e747a0ac944b","Type":"ContainerDied","Data":"9f489ad3ab453cef7d9b794ca5b1a1eb87386140c9c9a7dc2e55d8b6fb47c5e1"} Oct 01 16:30:02 crc kubenswrapper[4949]: I1001 16:30:02.460028 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-9mbp4" podUID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerName="registry-server" containerID="cri-o://4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a" gracePeriod=2 Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.030765 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-d2dn8" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" containerName="registry-server" probeResult="failure" output=< Oct 01 16:30:03 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Oct 01 16:30:03 crc kubenswrapper[4949]: > Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.050102 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9mbp4" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.157955 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kbxs7\" (UniqueName: \"kubernetes.io/projected/d04090b3-c163-4510-a46f-6e0510cfab4f-kube-api-access-kbxs7\") pod \"d04090b3-c163-4510-a46f-6e0510cfab4f\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.158198 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-catalog-content\") pod \"d04090b3-c163-4510-a46f-6e0510cfab4f\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.158288 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-utilities\") pod \"d04090b3-c163-4510-a46f-6e0510cfab4f\" (UID: \"d04090b3-c163-4510-a46f-6e0510cfab4f\") " Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.159248 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-utilities" (OuterVolumeSpecName: "utilities") pod "d04090b3-c163-4510-a46f-6e0510cfab4f" (UID: "d04090b3-c163-4510-a46f-6e0510cfab4f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.163597 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d04090b3-c163-4510-a46f-6e0510cfab4f-kube-api-access-kbxs7" (OuterVolumeSpecName: "kube-api-access-kbxs7") pod "d04090b3-c163-4510-a46f-6e0510cfab4f" (UID: "d04090b3-c163-4510-a46f-6e0510cfab4f"). InnerVolumeSpecName "kube-api-access-kbxs7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.229866 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d04090b3-c163-4510-a46f-6e0510cfab4f" (UID: "d04090b3-c163-4510-a46f-6e0510cfab4f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.261655 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.262004 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kbxs7\" (UniqueName: \"kubernetes.io/projected/d04090b3-c163-4510-a46f-6e0510cfab4f-kube-api-access-kbxs7\") on node \"crc\" DevicePath \"\"" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.262089 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d04090b3-c163-4510-a46f-6e0510cfab4f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.474610 4949 generic.go:334] "Generic (PLEG): container finished" podID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerID="4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a" exitCode=0 Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.474689 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-9mbp4" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.474735 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mbp4" event={"ID":"d04090b3-c163-4510-a46f-6e0510cfab4f","Type":"ContainerDied","Data":"4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a"} Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.474771 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-9mbp4" event={"ID":"d04090b3-c163-4510-a46f-6e0510cfab4f","Type":"ContainerDied","Data":"72eb767821c70e804802c53aa5c376066cbc47023e2dc2fbce6c820d7058dfb7"} Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.474812 4949 scope.go:117] "RemoveContainer" containerID="4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.501742 4949 scope.go:117] "RemoveContainer" containerID="0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.520962 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-9mbp4"] Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.536734 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-9mbp4"] Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.562966 4949 scope.go:117] "RemoveContainer" containerID="d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.585877 4949 scope.go:117] "RemoveContainer" containerID="4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a" Oct 01 16:30:03 crc kubenswrapper[4949]: E1001 16:30:03.588002 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a\": container with ID starting with 4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a not found: ID does not exist" containerID="4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.588061 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a"} err="failed to get container status \"4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a\": rpc error: code = NotFound desc = could not find container \"4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a\": container with ID starting with 4a97a5deea8bfc89834b32e16d7ed40317f84c41bec5fbd860ad14ca571b2f0a not found: ID does not exist" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.588099 4949 scope.go:117] "RemoveContainer" containerID="0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340" Oct 01 16:30:03 crc kubenswrapper[4949]: E1001 16:30:03.588673 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340\": container with ID starting with 0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340 not found: ID does not exist" containerID="0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.588724 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340"} err="failed to get container status \"0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340\": rpc error: code = NotFound desc = could not find container \"0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340\": container with ID 
starting with 0d6642d66ff9c80f19c55c8f937017cc00bc8874516abd7807842027407c6340 not found: ID does not exist" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.588765 4949 scope.go:117] "RemoveContainer" containerID="d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca" Oct 01 16:30:03 crc kubenswrapper[4949]: E1001 16:30:03.589358 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca\": container with ID starting with d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca not found: ID does not exist" containerID="d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.589386 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca"} err="failed to get container status \"d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca\": rpc error: code = NotFound desc = could not find container \"d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca\": container with ID starting with d7062d115762a80cde011d05447d93c127ccb8e2ea4b77793247acdd899a65ca not found: ID does not exist" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.614998 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d04090b3-c163-4510-a46f-6e0510cfab4f" path="/var/lib/kubelet/pods/d04090b3-c163-4510-a46f-6e0510cfab4f/volumes" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.847188 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.977252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9525d427-6873-416a-bec5-e747a0ac944b-secret-volume\") pod \"9525d427-6873-416a-bec5-e747a0ac944b\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.977357 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9525d427-6873-416a-bec5-e747a0ac944b-config-volume\") pod \"9525d427-6873-416a-bec5-e747a0ac944b\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.977385 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqhtg\" (UniqueName: \"kubernetes.io/projected/9525d427-6873-416a-bec5-e747a0ac944b-kube-api-access-cqhtg\") pod \"9525d427-6873-416a-bec5-e747a0ac944b\" (UID: \"9525d427-6873-416a-bec5-e747a0ac944b\") " Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.978288 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9525d427-6873-416a-bec5-e747a0ac944b-config-volume" (OuterVolumeSpecName: "config-volume") pod "9525d427-6873-416a-bec5-e747a0ac944b" (UID: "9525d427-6873-416a-bec5-e747a0ac944b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.992428 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9525d427-6873-416a-bec5-e747a0ac944b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9525d427-6873-416a-bec5-e747a0ac944b" (UID: "9525d427-6873-416a-bec5-e747a0ac944b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:30:03 crc kubenswrapper[4949]: I1001 16:30:03.992430 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9525d427-6873-416a-bec5-e747a0ac944b-kube-api-access-cqhtg" (OuterVolumeSpecName: "kube-api-access-cqhtg") pod "9525d427-6873-416a-bec5-e747a0ac944b" (UID: "9525d427-6873-416a-bec5-e747a0ac944b"). InnerVolumeSpecName "kube-api-access-cqhtg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:30:04 crc kubenswrapper[4949]: I1001 16:30:04.079223 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9525d427-6873-416a-bec5-e747a0ac944b-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 01 16:30:04 crc kubenswrapper[4949]: I1001 16:30:04.079256 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9525d427-6873-416a-bec5-e747a0ac944b-config-volume\") on node \"crc\" DevicePath \"\"" Oct 01 16:30:04 crc kubenswrapper[4949]: I1001 16:30:04.079269 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqhtg\" (UniqueName: \"kubernetes.io/projected/9525d427-6873-416a-bec5-e747a0ac944b-kube-api-access-cqhtg\") on node \"crc\" DevicePath \"\"" Oct 01 16:30:04 crc kubenswrapper[4949]: I1001 16:30:04.485818 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs" event={"ID":"9525d427-6873-416a-bec5-e747a0ac944b","Type":"ContainerDied","Data":"6902b370e1ea48126b0d6682357cd225ef3cee640a9561b3462bf030b7e577e0"} Oct 01 16:30:04 crc kubenswrapper[4949]: I1001 16:30:04.485855 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6902b370e1ea48126b0d6682357cd225ef3cee640a9561b3462bf030b7e577e0" Oct 01 16:30:04 crc kubenswrapper[4949]: I1001 16:30:04.486310 4949 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs" Oct 01 16:30:04 crc kubenswrapper[4949]: I1001 16:30:04.977448 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d"] Oct 01 16:30:04 crc kubenswrapper[4949]: I1001 16:30:04.983465 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322225-2kx5d"] Oct 01 16:30:05 crc kubenswrapper[4949]: I1001 16:30:05.618188 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca85de63-357f-46ae-8559-88caf4cf27f3" path="/var/lib/kubelet/pods/ca85de63-357f-46ae-8559-88caf4cf27f3/volumes" Oct 01 16:30:12 crc kubenswrapper[4949]: I1001 16:30:12.046940 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-d2dn8" Oct 01 16:30:12 crc kubenswrapper[4949]: I1001 16:30:12.094560 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-d2dn8" Oct 01 16:30:12 crc kubenswrapper[4949]: I1001 16:30:12.287463 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2dn8"] Oct 01 16:30:12 crc kubenswrapper[4949]: I1001 16:30:12.602070 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:30:12 crc kubenswrapper[4949]: E1001 16:30:12.602747 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:30:13 crc 
kubenswrapper[4949]: I1001 16:30:13.577992 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-d2dn8" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" containerName="registry-server" containerID="cri-o://edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b" gracePeriod=2 Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.113991 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-d2dn8" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.191835 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-catalog-content\") pod \"c86331e7-1eac-432b-b509-680a577fb2f5\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.191890 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lhg2g\" (UniqueName: \"kubernetes.io/projected/c86331e7-1eac-432b-b509-680a577fb2f5-kube-api-access-lhg2g\") pod \"c86331e7-1eac-432b-b509-680a577fb2f5\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.192070 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-utilities\") pod \"c86331e7-1eac-432b-b509-680a577fb2f5\" (UID: \"c86331e7-1eac-432b-b509-680a577fb2f5\") " Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.193283 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-utilities" (OuterVolumeSpecName: "utilities") pod "c86331e7-1eac-432b-b509-680a577fb2f5" (UID: "c86331e7-1eac-432b-b509-680a577fb2f5"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.200064 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c86331e7-1eac-432b-b509-680a577fb2f5-kube-api-access-lhg2g" (OuterVolumeSpecName: "kube-api-access-lhg2g") pod "c86331e7-1eac-432b-b509-680a577fb2f5" (UID: "c86331e7-1eac-432b-b509-680a577fb2f5"). InnerVolumeSpecName "kube-api-access-lhg2g". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.275042 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c86331e7-1eac-432b-b509-680a577fb2f5" (UID: "c86331e7-1eac-432b-b509-680a577fb2f5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.294112 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.294184 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lhg2g\" (UniqueName: \"kubernetes.io/projected/c86331e7-1eac-432b-b509-680a577fb2f5-kube-api-access-lhg2g\") on node \"crc\" DevicePath \"\"" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.294196 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c86331e7-1eac-432b-b509-680a577fb2f5-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.590878 4949 generic.go:334] "Generic (PLEG): container finished" podID="c86331e7-1eac-432b-b509-680a577fb2f5" 
containerID="edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b" exitCode=0 Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.590925 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2dn8" event={"ID":"c86331e7-1eac-432b-b509-680a577fb2f5","Type":"ContainerDied","Data":"edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b"} Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.590974 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-d2dn8" event={"ID":"c86331e7-1eac-432b-b509-680a577fb2f5","Type":"ContainerDied","Data":"3ab1acb2bead3999b2966512dc5e5c7ef3090e5125e6f8d71deca25f669f9b48"} Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.590998 4949 scope.go:117] "RemoveContainer" containerID="edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.592216 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-d2dn8" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.618291 4949 scope.go:117] "RemoveContainer" containerID="f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.643635 4949 scope.go:117] "RemoveContainer" containerID="568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.676520 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-d2dn8"] Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.693202 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-d2dn8"] Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.697536 4949 scope.go:117] "RemoveContainer" containerID="edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b" Oct 01 16:30:14 crc kubenswrapper[4949]: E1001 16:30:14.698065 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b\": container with ID starting with edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b not found: ID does not exist" containerID="edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.698175 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b"} err="failed to get container status \"edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b\": rpc error: code = NotFound desc = could not find container \"edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b\": container with ID starting with edcb77817691feeac87d30d1dcc9e319bc5af9fd0cc02a7d04a6eb03c6766c9b not found: ID does 
not exist" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.698218 4949 scope.go:117] "RemoveContainer" containerID="f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e" Oct 01 16:30:14 crc kubenswrapper[4949]: E1001 16:30:14.698592 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e\": container with ID starting with f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e not found: ID does not exist" containerID="f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.698636 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e"} err="failed to get container status \"f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e\": rpc error: code = NotFound desc = could not find container \"f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e\": container with ID starting with f4483fba9812b6bbdf078dfeb1c5fce4e6d837039f6459e1506ee5e0e55bba9e not found: ID does not exist" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.698662 4949 scope.go:117] "RemoveContainer" containerID="568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff" Oct 01 16:30:14 crc kubenswrapper[4949]: E1001 16:30:14.698985 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff\": container with ID starting with 568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff not found: ID does not exist" containerID="568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff" Oct 01 16:30:14 crc kubenswrapper[4949]: I1001 16:30:14.699048 4949 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff"} err="failed to get container status \"568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff\": rpc error: code = NotFound desc = could not find container \"568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff\": container with ID starting with 568082cf70cc74a2c276b4fb3164dbb1257fa01f83e2d82f55ceb4db3a211fff not found: ID does not exist" Oct 01 16:30:15 crc kubenswrapper[4949]: I1001 16:30:15.620455 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" path="/var/lib/kubelet/pods/c86331e7-1eac-432b-b509-680a577fb2f5/volumes" Oct 01 16:30:27 crc kubenswrapper[4949]: I1001 16:30:27.603277 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:30:27 crc kubenswrapper[4949]: E1001 16:30:27.604081 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:30:39 crc kubenswrapper[4949]: I1001 16:30:39.602350 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:30:39 crc kubenswrapper[4949]: E1001 16:30:39.603728 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:30:47 crc kubenswrapper[4949]: I1001 16:30:47.630039 4949 scope.go:117] "RemoveContainer" containerID="b9d979121aacc0de55c1025b22100430f3a5abb58daafe1d06d01c94c0852cac" Oct 01 16:30:51 crc kubenswrapper[4949]: I1001 16:30:51.612297 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:30:51 crc kubenswrapper[4949]: E1001 16:30:51.613455 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:31:04 crc kubenswrapper[4949]: I1001 16:31:04.601762 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:31:04 crc kubenswrapper[4949]: E1001 16:31:04.602579 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:31:11 crc kubenswrapper[4949]: I1001 16:31:11.272784 4949 generic.go:334] "Generic (PLEG): container finished" podID="b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" containerID="7142e9f48879dd6047c57f463a0258d8618abe4f04ae189675496911fe2dd21d" exitCode=0 Oct 01 16:31:11 crc kubenswrapper[4949]: I1001 16:31:11.272916 4949 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" event={"ID":"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4","Type":"ContainerDied","Data":"7142e9f48879dd6047c57f463a0258d8618abe4f04ae189675496911fe2dd21d"} Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.700748 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.879230 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-secret-0\") pod \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.879487 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ceph\") pod \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.879522 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-inventory\") pod \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.879636 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cbbhh\" (UniqueName: \"kubernetes.io/projected/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-kube-api-access-cbbhh\") pod \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.879719 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" 
(UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ssh-key\") pod \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.879792 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-combined-ca-bundle\") pod \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\" (UID: \"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4\") " Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.885549 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-combined-ca-bundle" (OuterVolumeSpecName: "libvirt-combined-ca-bundle") pod "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" (UID: "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4"). InnerVolumeSpecName "libvirt-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.886319 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-kube-api-access-cbbhh" (OuterVolumeSpecName: "kube-api-access-cbbhh") pod "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" (UID: "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4"). InnerVolumeSpecName "kube-api-access-cbbhh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.897518 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ceph" (OuterVolumeSpecName: "ceph") pod "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" (UID: "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.913228 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" (UID: "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.914988 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-secret-0" (OuterVolumeSpecName: "libvirt-secret-0") pod "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" (UID: "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4"). InnerVolumeSpecName "libvirt-secret-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.938414 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-inventory" (OuterVolumeSpecName: "inventory") pod "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" (UID: "b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4"). InnerVolumeSpecName "inventory". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.981336 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.981368 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cbbhh\" (UniqueName: \"kubernetes.io/projected/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-kube-api-access-cbbhh\") on node \"crc\" DevicePath \"\"" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.981403 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.981413 4949 reconciler_common.go:293] "Volume detached for volume \"libvirt-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.981422 4949 reconciler_common.go:293] "Volume detached for volume \"libvirt-secret-0\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-libvirt-secret-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:31:12 crc kubenswrapper[4949]: I1001 16:31:12.981431 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.293776 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" event={"ID":"b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4","Type":"ContainerDied","Data":"d61c5d4f337b14d9d962d84f27caac46d67e93ebfbfc3c1adbb7dbe0ad1e1fa8"} Oct 01 16:31:13 crc 
kubenswrapper[4949]: I1001 16:31:13.293813 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d61c5d4f337b14d9d962d84f27caac46d67e93ebfbfc3c1adbb7dbe0ad1e1fa8" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.293819 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.420533 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5"] Oct 01 16:31:13 crc kubenswrapper[4949]: E1001 16:31:13.421067 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerName="extract-utilities" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421095 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerName="extract-utilities" Oct 01 16:31:13 crc kubenswrapper[4949]: E1001 16:31:13.421110 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" containerName="extract-utilities" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421136 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" containerName="extract-utilities" Oct 01 16:31:13 crc kubenswrapper[4949]: E1001 16:31:13.421154 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" containerName="extract-content" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421164 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" containerName="extract-content" Oct 01 16:31:13 crc kubenswrapper[4949]: E1001 16:31:13.421182 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9525d427-6873-416a-bec5-e747a0ac944b" 
containerName="collect-profiles" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421190 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9525d427-6873-416a-bec5-e747a0ac944b" containerName="collect-profiles" Oct 01 16:31:13 crc kubenswrapper[4949]: E1001 16:31:13.421205 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerName="registry-server" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421212 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerName="registry-server" Oct 01 16:31:13 crc kubenswrapper[4949]: E1001 16:31:13.421233 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421242 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 01 16:31:13 crc kubenswrapper[4949]: E1001 16:31:13.421253 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" containerName="registry-server" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421261 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" containerName="registry-server" Oct 01 16:31:13 crc kubenswrapper[4949]: E1001 16:31:13.421274 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerName="extract-content" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421282 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerName="extract-content" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421527 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4" containerName="libvirt-edpm-deployment-openstack-edpm-ipam" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421551 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9525d427-6873-416a-bec5-e747a0ac944b" containerName="collect-profiles" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421568 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c86331e7-1eac-432b-b509-680a577fb2f5" containerName="registry-server" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.421584 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d04090b3-c163-4510-a46f-6e0510cfab4f" containerName="registry-server" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.422367 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.425624 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5"] Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.426644 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"openstack-aee-default-env" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.426865 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.427025 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-migration-ssh-key" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.427199 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"nova-cell1-compute-config" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.427385 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"openstack-edpm-ipam-dockercfg-dwhcl" Oct 01 16:31:13 crc 
kubenswrapper[4949]: I1001 16:31:13.427533 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplane-ansible-ssh-private-key-secret" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.427699 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"nova-extra-config" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.427887 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"ceph-nova" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.428797 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"dataplanenodeset-openstack-edpm-ipam" Oct 01 16:31:13 crc kubenswrapper[4949]: E1001 16:31:13.505114 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26e9d4b_3eb7_4325_b59b_dde6b2b7d2f4.slice/crio-d61c5d4f337b14d9d962d84f27caac46d67e93ebfbfc3c1adbb7dbe0ad1e1fa8\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb26e9d4b_3eb7_4325_b59b_dde6b2b7d2f4.slice\": RecentStats: unable to find data in memory cache]" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.590821 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.591184 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: 
\"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.591318 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.591483 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.591565 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfz9p\" (UniqueName: \"kubernetes.io/projected/3c32efc4-95e1-4528-981f-0055372e12db-kube-api-access-gfz9p\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.591714 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-ceph-nova-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.591850 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.591903 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.591955 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.592009 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: 
\"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.592060 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.693919 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694205 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfz9p\" (UniqueName: \"kubernetes.io/projected/3c32efc4-95e1-4528-981f-0055372e12db-kube-api-access-gfz9p\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694272 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694315 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694333 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694361 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694393 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694415 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"inventory\" (UniqueName: 
\"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694467 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694491 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.694532 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.697230 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-ceph-nova-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: 
\"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.698402 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-nova-extra-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.701358 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ceph\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.701663 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.701720 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ssh-key\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.702609 4949 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-0\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.703300 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-inventory\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.704656 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-1\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.704840 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-custom-ceph-combined-ca-bundle\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.714116 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-0\") pod 
\"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.714670 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfz9p\" (UniqueName: \"kubernetes.io/projected/3c32efc4-95e1-4528-981f-0055372e12db-kube-api-access-gfz9p\") pod \"nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:13 crc kubenswrapper[4949]: I1001 16:31:13.747857 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:31:14 crc kubenswrapper[4949]: I1001 16:31:14.303425 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5"] Oct 01 16:31:15 crc kubenswrapper[4949]: I1001 16:31:15.314048 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" event={"ID":"3c32efc4-95e1-4528-981f-0055372e12db","Type":"ContainerStarted","Data":"89521a8bff3ba8b3230b20b19d5f0c0f561918d203266eee5f99733ff49d0b32"} Oct 01 16:31:15 crc kubenswrapper[4949]: I1001 16:31:15.314441 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" event={"ID":"3c32efc4-95e1-4528-981f-0055372e12db","Type":"ContainerStarted","Data":"64426f40170c41b79bea25b57a1d41000e4568b954144a9aa9073e77c2e4f625"} Oct 01 16:31:15 crc kubenswrapper[4949]: I1001 16:31:15.342881 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" podStartSLOduration=1.786720152 
podStartE2EDuration="2.342860751s" podCreationTimestamp="2025-10-01 16:31:13 +0000 UTC" firstStartedPulling="2025-10-01 16:31:14.319388694 +0000 UTC m=+2973.624994895" lastFinishedPulling="2025-10-01 16:31:14.875529293 +0000 UTC m=+2974.181135494" observedRunningTime="2025-10-01 16:31:15.331701173 +0000 UTC m=+2974.637307374" watchObservedRunningTime="2025-10-01 16:31:15.342860751 +0000 UTC m=+2974.648466942" Oct 01 16:31:15 crc kubenswrapper[4949]: I1001 16:31:15.603194 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:31:15 crc kubenswrapper[4949]: E1001 16:31:15.603451 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:31:26 crc kubenswrapper[4949]: I1001 16:31:26.601543 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:31:26 crc kubenswrapper[4949]: E1001 16:31:26.602328 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:31:37 crc kubenswrapper[4949]: I1001 16:31:37.602649 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:31:37 crc kubenswrapper[4949]: E1001 16:31:37.603486 
4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:31:49 crc kubenswrapper[4949]: I1001 16:31:49.602076 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:31:49 crc kubenswrapper[4949]: E1001 16:31:49.602849 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:32:01 crc kubenswrapper[4949]: I1001 16:32:01.608312 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:32:01 crc kubenswrapper[4949]: E1001 16:32:01.609094 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:32:13 crc kubenswrapper[4949]: I1001 16:32:13.601991 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:32:13 crc kubenswrapper[4949]: E1001 
16:32:13.602945 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.753434 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-9xvj6"] Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.757012 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.762585 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xvj6"] Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.882214 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whxqz\" (UniqueName: \"kubernetes.io/projected/05262b67-71ad-45cb-9f8d-8d96a19162c4-kube-api-access-whxqz\") pod \"redhat-marketplace-9xvj6\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.882329 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-catalog-content\") pod \"redhat-marketplace-9xvj6\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.882528 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-utilities\") pod \"redhat-marketplace-9xvj6\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.989744 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-whxqz\" (UniqueName: \"kubernetes.io/projected/05262b67-71ad-45cb-9f8d-8d96a19162c4-kube-api-access-whxqz\") pod \"redhat-marketplace-9xvj6\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.989819 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-catalog-content\") pod \"redhat-marketplace-9xvj6\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.989958 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-utilities\") pod \"redhat-marketplace-9xvj6\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.990365 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-catalog-content\") pod \"redhat-marketplace-9xvj6\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:24 crc kubenswrapper[4949]: I1001 16:32:24.990476 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-utilities\") pod \"redhat-marketplace-9xvj6\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:25 crc kubenswrapper[4949]: I1001 16:32:25.020904 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-whxqz\" (UniqueName: \"kubernetes.io/projected/05262b67-71ad-45cb-9f8d-8d96a19162c4-kube-api-access-whxqz\") pod \"redhat-marketplace-9xvj6\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:25 crc kubenswrapper[4949]: I1001 16:32:25.090801 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:25 crc kubenswrapper[4949]: W1001 16:32:25.593362 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05262b67_71ad_45cb_9f8d_8d96a19162c4.slice/crio-fdee8acdc21a1565fb59bc13c5d7a4411b09e6b86bf443ca8baa56432f6c2b5b WatchSource:0}: Error finding container fdee8acdc21a1565fb59bc13c5d7a4411b09e6b86bf443ca8baa56432f6c2b5b: Status 404 returned error can't find the container with id fdee8acdc21a1565fb59bc13c5d7a4411b09e6b86bf443ca8baa56432f6c2b5b Oct 01 16:32:25 crc kubenswrapper[4949]: I1001 16:32:25.595740 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xvj6"] Oct 01 16:32:25 crc kubenswrapper[4949]: I1001 16:32:25.603320 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:32:25 crc kubenswrapper[4949]: E1001 16:32:25.604498 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:32:26 crc kubenswrapper[4949]: I1001 16:32:26.025649 4949 generic.go:334] "Generic (PLEG): container finished" podID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerID="aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24" exitCode=0 Oct 01 16:32:26 crc kubenswrapper[4949]: I1001 16:32:26.025755 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xvj6" event={"ID":"05262b67-71ad-45cb-9f8d-8d96a19162c4","Type":"ContainerDied","Data":"aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24"} Oct 01 16:32:26 crc kubenswrapper[4949]: I1001 16:32:26.025990 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xvj6" event={"ID":"05262b67-71ad-45cb-9f8d-8d96a19162c4","Type":"ContainerStarted","Data":"fdee8acdc21a1565fb59bc13c5d7a4411b09e6b86bf443ca8baa56432f6c2b5b"} Oct 01 16:32:28 crc kubenswrapper[4949]: I1001 16:32:28.050352 4949 generic.go:334] "Generic (PLEG): container finished" podID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerID="9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6" exitCode=0 Oct 01 16:32:28 crc kubenswrapper[4949]: I1001 16:32:28.050463 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xvj6" event={"ID":"05262b67-71ad-45cb-9f8d-8d96a19162c4","Type":"ContainerDied","Data":"9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6"} Oct 01 16:32:29 crc kubenswrapper[4949]: I1001 16:32:29.065579 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xvj6" 
event={"ID":"05262b67-71ad-45cb-9f8d-8d96a19162c4","Type":"ContainerStarted","Data":"d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9"} Oct 01 16:32:29 crc kubenswrapper[4949]: I1001 16:32:29.098745 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-9xvj6" podStartSLOduration=2.498386036 podStartE2EDuration="5.098720234s" podCreationTimestamp="2025-10-01 16:32:24 +0000 UTC" firstStartedPulling="2025-10-01 16:32:26.028032235 +0000 UTC m=+3045.333638426" lastFinishedPulling="2025-10-01 16:32:28.628366393 +0000 UTC m=+3047.933972624" observedRunningTime="2025-10-01 16:32:29.086002673 +0000 UTC m=+3048.391608904" watchObservedRunningTime="2025-10-01 16:32:29.098720234 +0000 UTC m=+3048.404326455" Oct 01 16:32:35 crc kubenswrapper[4949]: I1001 16:32:35.092068 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:35 crc kubenswrapper[4949]: I1001 16:32:35.093876 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:35 crc kubenswrapper[4949]: I1001 16:32:35.164718 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:36 crc kubenswrapper[4949]: I1001 16:32:36.210618 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:36 crc kubenswrapper[4949]: I1001 16:32:36.267307 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xvj6"] Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.155629 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-9xvj6" podUID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerName="registry-server" 
containerID="cri-o://d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9" gracePeriod=2 Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.601613 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:32:38 crc kubenswrapper[4949]: E1001 16:32:38.602162 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.720401 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.886760 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-whxqz\" (UniqueName: \"kubernetes.io/projected/05262b67-71ad-45cb-9f8d-8d96a19162c4-kube-api-access-whxqz\") pod \"05262b67-71ad-45cb-9f8d-8d96a19162c4\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.886829 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-utilities\") pod \"05262b67-71ad-45cb-9f8d-8d96a19162c4\" (UID: \"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.887039 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-catalog-content\") pod \"05262b67-71ad-45cb-9f8d-8d96a19162c4\" (UID: 
\"05262b67-71ad-45cb-9f8d-8d96a19162c4\") " Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.888847 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-utilities" (OuterVolumeSpecName: "utilities") pod "05262b67-71ad-45cb-9f8d-8d96a19162c4" (UID: "05262b67-71ad-45cb-9f8d-8d96a19162c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.902541 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05262b67-71ad-45cb-9f8d-8d96a19162c4-kube-api-access-whxqz" (OuterVolumeSpecName: "kube-api-access-whxqz") pod "05262b67-71ad-45cb-9f8d-8d96a19162c4" (UID: "05262b67-71ad-45cb-9f8d-8d96a19162c4"). InnerVolumeSpecName "kube-api-access-whxqz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.915173 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "05262b67-71ad-45cb-9f8d-8d96a19162c4" (UID: "05262b67-71ad-45cb-9f8d-8d96a19162c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.990476 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-whxqz\" (UniqueName: \"kubernetes.io/projected/05262b67-71ad-45cb-9f8d-8d96a19162c4-kube-api-access-whxqz\") on node \"crc\" DevicePath \"\"" Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.990536 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:32:38 crc kubenswrapper[4949]: I1001 16:32:38.990556 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/05262b67-71ad-45cb-9f8d-8d96a19162c4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.170926 4949 generic.go:334] "Generic (PLEG): container finished" podID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerID="d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9" exitCode=0 Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.171005 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xvj6" event={"ID":"05262b67-71ad-45cb-9f8d-8d96a19162c4","Type":"ContainerDied","Data":"d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9"} Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.171027 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-9xvj6" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.171065 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-9xvj6" event={"ID":"05262b67-71ad-45cb-9f8d-8d96a19162c4","Type":"ContainerDied","Data":"fdee8acdc21a1565fb59bc13c5d7a4411b09e6b86bf443ca8baa56432f6c2b5b"} Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.171108 4949 scope.go:117] "RemoveContainer" containerID="d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.211615 4949 scope.go:117] "RemoveContainer" containerID="9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.233435 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xvj6"] Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.247389 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-9xvj6"] Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.261721 4949 scope.go:117] "RemoveContainer" containerID="aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.310453 4949 scope.go:117] "RemoveContainer" containerID="d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9" Oct 01 16:32:39 crc kubenswrapper[4949]: E1001 16:32:39.311155 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9\": container with ID starting with d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9 not found: ID does not exist" containerID="d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.311219 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9"} err="failed to get container status \"d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9\": rpc error: code = NotFound desc = could not find container \"d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9\": container with ID starting with d45b750c54caf5a6e55ba926e8ec6872f63615d227893718fa9fac44382ea0c9 not found: ID does not exist" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.311263 4949 scope.go:117] "RemoveContainer" containerID="9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6" Oct 01 16:32:39 crc kubenswrapper[4949]: E1001 16:32:39.311882 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6\": container with ID starting with 9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6 not found: ID does not exist" containerID="9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.311945 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6"} err="failed to get container status \"9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6\": rpc error: code = NotFound desc = could not find container \"9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6\": container with ID starting with 9e3792b9763d430aa4f8152447930ec15eb32d1cc7558955840b8d4c508f29f6 not found: ID does not exist" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.311988 4949 scope.go:117] "RemoveContainer" containerID="aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24" Oct 01 16:32:39 crc kubenswrapper[4949]: E1001 
16:32:39.312518 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24\": container with ID starting with aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24 not found: ID does not exist" containerID="aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.312573 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24"} err="failed to get container status \"aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24\": rpc error: code = NotFound desc = could not find container \"aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24\": container with ID starting with aa03efe9d37def8e32ad73b690be7fa2e54ea387674d7f06c0b32cfeb7266d24 not found: ID does not exist" Oct 01 16:32:39 crc kubenswrapper[4949]: I1001 16:32:39.616706 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="05262b67-71ad-45cb-9f8d-8d96a19162c4" path="/var/lib/kubelet/pods/05262b67-71ad-45cb-9f8d-8d96a19162c4/volumes" Oct 01 16:32:53 crc kubenswrapper[4949]: I1001 16:32:53.602100 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:32:53 crc kubenswrapper[4949]: E1001 16:32:53.602836 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:33:06 crc kubenswrapper[4949]: I1001 16:33:06.624952 
4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:33:06 crc kubenswrapper[4949]: E1001 16:33:06.626032 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:33:17 crc kubenswrapper[4949]: I1001 16:33:17.603145 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:33:17 crc kubenswrapper[4949]: E1001 16:33:17.604455 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:33:32 crc kubenswrapper[4949]: I1001 16:33:32.602325 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:33:32 crc kubenswrapper[4949]: E1001 16:33:32.604744 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:33:45 crc kubenswrapper[4949]: I1001 
16:33:45.601430 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:33:45 crc kubenswrapper[4949]: E1001 16:33:45.602084 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:34:00 crc kubenswrapper[4949]: I1001 16:34:00.602111 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:34:00 crc kubenswrapper[4949]: E1001 16:34:00.602930 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:34:14 crc kubenswrapper[4949]: I1001 16:34:14.602174 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:34:14 crc kubenswrapper[4949]: E1001 16:34:14.603053 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:34:25 crc 
kubenswrapper[4949]: I1001 16:34:25.601583 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:34:25 crc kubenswrapper[4949]: E1001 16:34:25.602449 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:34:36 crc kubenswrapper[4949]: I1001 16:34:36.602347 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:34:36 crc kubenswrapper[4949]: E1001 16:34:36.603361 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:34:48 crc kubenswrapper[4949]: I1001 16:34:48.602939 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:34:49 crc kubenswrapper[4949]: I1001 16:34:49.575886 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"2ae213e758e493fc55d198c1a675039b65ae449df8af35955dd5bb19ef866b5d"} Oct 01 16:35:13 crc kubenswrapper[4949]: I1001 16:35:13.874031 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="3c32efc4-95e1-4528-981f-0055372e12db" containerID="89521a8bff3ba8b3230b20b19d5f0c0f561918d203266eee5f99733ff49d0b32" exitCode=0 Oct 01 16:35:13 crc kubenswrapper[4949]: I1001 16:35:13.874147 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" event={"ID":"3c32efc4-95e1-4528-981f-0055372e12db","Type":"ContainerDied","Data":"89521a8bff3ba8b3230b20b19d5f0c0f561918d203266eee5f99733ff49d0b32"} Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.306334 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.419739 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-custom-ceph-combined-ca-bundle\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.419858 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-1\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.419901 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-inventory\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.419957 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph-nova-0\" (UniqueName: 
\"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-ceph-nova-0\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.419999 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-0\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.420041 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfz9p\" (UniqueName: \"kubernetes.io/projected/3c32efc4-95e1-4528-981f-0055372e12db-kube-api-access-gfz9p\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.420074 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-nova-extra-config-0\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.420167 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ssh-key\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.420204 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ceph\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 
16:35:15.420272 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-0\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.420306 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-1\") pod \"3c32efc4-95e1-4528-981f-0055372e12db\" (UID: \"3c32efc4-95e1-4528-981f-0055372e12db\") " Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.426734 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ceph" (OuterVolumeSpecName: "ceph") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.426838 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c32efc4-95e1-4528-981f-0055372e12db-kube-api-access-gfz9p" (OuterVolumeSpecName: "kube-api-access-gfz9p") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "kube-api-access-gfz9p". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.427711 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-custom-ceph-combined-ca-bundle" (OuterVolumeSpecName: "nova-custom-ceph-combined-ca-bundle") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). 
InnerVolumeSpecName "nova-custom-ceph-combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.451492 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-ceph-nova-0" (OuterVolumeSpecName: "ceph-nova-0") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "ceph-nova-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.458672 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-1" (OuterVolumeSpecName: "nova-migration-ssh-key-1") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "nova-migration-ssh-key-1". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.459821 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-inventory" (OuterVolumeSpecName: "inventory") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "inventory". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.460277 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-1" (OuterVolumeSpecName: "nova-cell1-compute-config-1") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "nova-cell1-compute-config-1". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.465815 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-0" (OuterVolumeSpecName: "nova-cell1-compute-config-0") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "nova-cell1-compute-config-0". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.466491 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.466523 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-nova-extra-config-0" (OuterVolumeSpecName: "nova-extra-config-0") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "nova-extra-config-0". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.474553 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-0" (OuterVolumeSpecName: "nova-migration-ssh-key-0") pod "3c32efc4-95e1-4528-981f-0055372e12db" (UID: "3c32efc4-95e1-4528-981f-0055372e12db"). InnerVolumeSpecName "nova-migration-ssh-key-0". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.522940 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.522979 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.522993 4949 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-0\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.523010 4949 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-1\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.523024 4949 reconciler_common.go:293] "Volume detached for volume \"nova-custom-ceph-combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-custom-ceph-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.523039 4949 reconciler_common.go:293] "Volume detached for volume \"nova-migration-ssh-key-1\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-migration-ssh-key-1\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.523054 4949 reconciler_common.go:293] "Volume detached for volume \"inventory\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-inventory\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc 
kubenswrapper[4949]: I1001 16:35:15.523069 4949 reconciler_common.go:293] "Volume detached for volume \"ceph-nova-0\" (UniqueName: \"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-ceph-nova-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.523082 4949 reconciler_common.go:293] "Volume detached for volume \"nova-cell1-compute-config-0\" (UniqueName: \"kubernetes.io/secret/3c32efc4-95e1-4528-981f-0055372e12db-nova-cell1-compute-config-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.523095 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfz9p\" (UniqueName: \"kubernetes.io/projected/3c32efc4-95e1-4528-981f-0055372e12db-kube-api-access-gfz9p\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.523107 4949 reconciler_common.go:293] "Volume detached for volume \"nova-extra-config-0\" (UniqueName: \"kubernetes.io/configmap/3c32efc4-95e1-4528-981f-0055372e12db-nova-extra-config-0\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.895030 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" event={"ID":"3c32efc4-95e1-4528-981f-0055372e12db","Type":"ContainerDied","Data":"64426f40170c41b79bea25b57a1d41000e4568b954144a9aa9073e77c2e4f625"} Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.895068 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64426f40170c41b79bea25b57a1d41000e4568b954144a9aa9073e77c2e4f625" Oct 01 16:35:15 crc kubenswrapper[4949]: I1001 16:35:15.895112 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.620690 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-backup-0"] Oct 01 16:35:29 crc kubenswrapper[4949]: E1001 16:35:29.621548 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerName="extract-utilities" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.621561 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerName="extract-utilities" Oct 01 16:35:29 crc kubenswrapper[4949]: E1001 16:35:29.621584 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerName="extract-content" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.621590 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerName="extract-content" Oct 01 16:35:29 crc kubenswrapper[4949]: E1001 16:35:29.621604 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3c32efc4-95e1-4528-981f-0055372e12db" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.621611 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="3c32efc4-95e1-4528-981f-0055372e12db" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 01 16:35:29 crc kubenswrapper[4949]: E1001 16:35:29.621636 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerName="registry-server" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.621641 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerName="registry-server" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.621795 4949 
memory_manager.go:354] "RemoveStaleState removing state" podUID="05262b67-71ad-45cb-9f8d-8d96a19162c4" containerName="registry-server" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.621812 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="3c32efc4-95e1-4528-981f-0055372e12db" containerName="nova-custom-ceph-edpm-deployment-openstack-edpm-ipam" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.622710 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.625718 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-backup-config-data" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.625859 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceph-conf-files" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.659457 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.660999 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.662826 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cinder-volume-volume1-config-data" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.672783 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.684633 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.693815 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.693855 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.693875 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.693892 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-run\") pod \"cinder-backup-0\" (UID: 
\"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.693919 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.693937 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-lib-modules\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.693973 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-scripts\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.694005 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-config-data\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.694031 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lnq65\" (UniqueName: \"kubernetes.io/projected/bdb84d6a-de04-4107-917f-c2a6599ed2dc-kube-api-access-lnq65\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: 
I1001 16:35:29.694057 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.694074 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.694089 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.694105 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-dev\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.694140 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.694169 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-sys\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.694184 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bdb84d6a-de04-4107-917f-c2a6599ed2dc-ceph\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.797029 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.797107 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-config-data\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.797155 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.797217 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-config-data\") pod 
\"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.797260 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.797292 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lnq65\" (UniqueName: \"kubernetes.io/projected/bdb84d6a-de04-4107-917f-c2a6599ed2dc-kube-api-access-lnq65\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.797366 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-run\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.797385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799443 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " 
pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799460 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799478 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799494 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-dev\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799513 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799530 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-dev\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799545 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799582 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-sys\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799599 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bdb84d6a-de04-4107-917f-c2a6599ed2dc-ceph\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799634 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lmqz6\" (UniqueName: \"kubernetes.io/projected/a9aee921-7753-4725-a42d-ee8161afd631-kube-api-access-lmqz6\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799667 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799689 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: 
\"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799709 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799724 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-run\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799751 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799767 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799782 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-sys\") pod \"cinder-volume-volume1-0\" (UID: 
\"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799814 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799836 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-lib-modules\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799864 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a9aee921-7753-4725-a42d-ee8161afd631-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799880 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799905 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc 
kubenswrapper[4949]: I1001 16:35:29.799920 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.799948 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-scripts\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.800814 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-lib-modules\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.800856 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-run\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.800967 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-var-lib-cinder\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.801303 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-var-locks-brick\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.801351 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-sys\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.801376 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-etc-iscsi\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.801365 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-etc-machine-id\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.801490 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-var-locks-cinder\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.801516 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-etc-nvme\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.801720 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/bdb84d6a-de04-4107-917f-c2a6599ed2dc-dev\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.804277 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-config-data\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.805553 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-scripts\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.805749 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-combined-ca-bundle\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.806424 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/bdb84d6a-de04-4107-917f-c2a6599ed2dc-ceph\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.816942 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/bdb84d6a-de04-4107-917f-c2a6599ed2dc-config-data-custom\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " 
pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.820895 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lnq65\" (UniqueName: \"kubernetes.io/projected/bdb84d6a-de04-4107-917f-c2a6599ed2dc-kube-api-access-lnq65\") pod \"cinder-backup-0\" (UID: \"bdb84d6a-de04-4107-917f-c2a6599ed2dc\") " pod="openstack/cinder-backup-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.901423 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lmqz6\" (UniqueName: \"kubernetes.io/projected/a9aee921-7753-4725-a42d-ee8161afd631-kube-api-access-lmqz6\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902003 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-var-locks-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902149 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902259 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-sys\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902392 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a9aee921-7753-4725-a42d-ee8161afd631-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902498 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902612 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902709 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902777 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-sys\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902085 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-cinder\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-var-locks-cinder\") pod 
\"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.902993 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-etc-iscsi\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903074 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-etc-nvme\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903191 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903281 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-etc-machine-id\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903319 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc 
kubenswrapper[4949]: I1001 16:35:29.903370 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903393 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903419 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-lib-modules\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903488 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-run\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903498 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-var-locks-brick\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903531 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-run\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903557 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903621 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-dev\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903654 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903671 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-cinder\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-var-lib-cinder\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.903718 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a9aee921-7753-4725-a42d-ee8161afd631-dev\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " 
pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.906110 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-scripts\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.906734 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-combined-ca-bundle\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.909013 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-config-data\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.912947 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a9aee921-7753-4725-a42d-ee8161afd631-config-data-custom\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.914564 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a9aee921-7753-4725-a42d-ee8161afd631-ceph\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.924629 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lmqz6\" (UniqueName: \"kubernetes.io/projected/a9aee921-7753-4725-a42d-ee8161afd631-kube-api-access-lmqz6\") pod \"cinder-volume-volume1-0\" (UID: \"a9aee921-7753-4725-a42d-ee8161afd631\") " pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:29 crc kubenswrapper[4949]: I1001 16:35:29.960858 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-backup-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.007331 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.206755 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-create-srpdf"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.209577 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-srpdf" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.222142 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-srpdf"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.261572 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-788cb9bcc-8rj25"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.263643 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.266814 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-config-data" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.267036 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon-horizon-dockercfg-5vlwl" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.267151 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"horizon-scripts" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.267251 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"horizon" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.306414 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-788cb9bcc-8rj25"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.315517 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qdh6\" (UniqueName: \"kubernetes.io/projected/4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d-kube-api-access-5qdh6\") pod \"manila-db-create-srpdf\" (UID: \"4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d\") " pod="openstack/manila-db-create-srpdf" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.370475 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.376030 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.379440 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-scripts" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.379661 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-glance-dockercfg-7kv4g" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.379776 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.384782 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.399253 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.415025 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7756fd4cb7-fb6wc"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.417982 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.420333 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qdh6\" (UniqueName: \"kubernetes.io/projected/4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d-kube-api-access-5qdh6\") pod \"manila-db-create-srpdf\" (UID: \"4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d\") " pod="openstack/manila-db-create-srpdf" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.420388 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-horizon-secret-key\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.420421 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-scripts\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.420455 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-config-data\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.420555 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flsp2\" (UniqueName: \"kubernetes.io/projected/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-kube-api-access-flsp2\") pod \"horizon-788cb9bcc-8rj25\" (UID: 
\"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.420591 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-logs\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.434037 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7756fd4cb7-fb6wc"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.438977 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.440869 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.447321 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.447805 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.447873 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.448210 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qdh6\" (UniqueName: \"kubernetes.io/projected/4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d-kube-api-access-5qdh6\") pod \"manila-db-create-srpdf\" (UID: \"4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d\") " pod="openstack/manila-db-create-srpdf" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522417 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-logs\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522509 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522542 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522599 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p95d6\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-kube-api-access-p95d6\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522633 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522659 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-7vwkc\" (UniqueName: \"kubernetes.io/projected/a2eee1a1-972c-4bf6-9dab-d55162980ed9-kube-api-access-7vwkc\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522691 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-horizon-secret-key\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522737 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522759 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522781 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-scripts\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522821 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-config-data\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522861 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522907 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522910 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-logs\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.522985 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.523021 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a2eee1a1-972c-4bf6-9dab-d55162980ed9-logs\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.523044 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-scripts\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.523080 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-config-data\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.523108 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-flsp2\" (UniqueName: \"kubernetes.io/projected/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-kube-api-access-flsp2\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.523152 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2eee1a1-972c-4bf6-9dab-d55162980ed9-horizon-secret-key\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.524091 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-scripts\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.524390 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-config-data\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.526788 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-horizon-secret-key\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.541966 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-flsp2\" (UniqueName: \"kubernetes.io/projected/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-kube-api-access-flsp2\") pod \"horizon-788cb9bcc-8rj25\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.546611 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-srpdf" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.600404 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-backup-0"] Oct 01 16:35:30 crc kubenswrapper[4949]: W1001 16:35:30.601458 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbdb84d6a_de04_4107_917f_c2a6599ed2dc.slice/crio-db42cda8c91a51345d2bbab1008d79edb02d25abc62bd87e3d808197c308ff7c WatchSource:0}: Error finding container db42cda8c91a51345d2bbab1008d79edb02d25abc62bd87e3d808197c308ff7c: Status 404 returned error can't find the container with id db42cda8c91a51345d2bbab1008d79edb02d25abc62bd87e3d808197c308ff7c Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.604850 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625024 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vz8lr\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-kube-api-access-vz8lr\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625059 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625112 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625178 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625222 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625282 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-logs\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625344 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625397 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2eee1a1-972c-4bf6-9dab-d55162980ed9-logs\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: 
\"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625418 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-scripts\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625438 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625470 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-ceph\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625508 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-config-data\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625537 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2eee1a1-972c-4bf6-9dab-d55162980ed9-horizon-secret-key\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " 
pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625559 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625644 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625716 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625770 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p95d6\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-kube-api-access-p95d6\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc 
kubenswrapper[4949]: I1001 16:35:30.625808 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625836 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7vwkc\" (UniqueName: \"kubernetes.io/projected/a2eee1a1-972c-4bf6-9dab-d55162980ed9-kube-api-access-7vwkc\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625865 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625885 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.625906 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.659568 4949 util.go:30] 
"No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.660019 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.660220 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2eee1a1-972c-4bf6-9dab-d55162980ed9-logs\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.660897 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.660920 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-logs\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.661795 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-config-data\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc 
kubenswrapper[4949]: I1001 16:35:30.664020 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.664963 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-ceph\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.665697 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2eee1a1-972c-4bf6-9dab-d55162980ed9-horizon-secret-key\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.666454 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.668489 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.670957 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.676917 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-scripts\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.682687 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7vwkc\" (UniqueName: \"kubernetes.io/projected/a2eee1a1-972c-4bf6-9dab-d55162980ed9-kube-api-access-7vwkc\") pod \"horizon-7756fd4cb7-fb6wc\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") " pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.683258 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p95d6\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-kube-api-access-p95d6\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.706343 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.727838 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.727969 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.728030 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.728056 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vz8lr\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-kube-api-access-vz8lr\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.728092 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.728184 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-logs\") 
pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.728242 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.728282 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-ceph\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.728333 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.728526 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.729235 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") 
" pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.729684 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-logs\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.732821 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-ceph\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.734163 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.734663 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-scripts\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.734961 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-config-data\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.735761 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.739167 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.749951 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vz8lr\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-kube-api-access-vz8lr\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.762086 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:30 crc kubenswrapper[4949]: I1001 16:35:30.786751 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/cinder-volume-volume1-0"] Oct 01 16:35:30 crc kubenswrapper[4949]: W1001 16:35:30.803958 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda9aee921_7753_4725_a42d_ee8161afd631.slice/crio-d8b73d802979e086a7ff7f8cd3fc373af5613e92c46c32a18d67def987bdc182 WatchSource:0}: Error finding container d8b73d802979e086a7ff7f8cd3fc373af5613e92c46c32a18d67def987bdc182: Status 404 returned error can't find the container with id d8b73d802979e086a7ff7f8cd3fc373af5613e92c46c32a18d67def987bdc182 Oct 01 16:35:31 crc kubenswrapper[4949]: I1001 16:35:31.006029 4949 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:31 crc kubenswrapper[4949]: I1001 16:35:31.088704 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 01 16:35:31 crc kubenswrapper[4949]: I1001 16:35:31.096693 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-create-srpdf"] Oct 01 16:35:31 crc kubenswrapper[4949]: I1001 16:35:31.119367 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7756fd4cb7-fb6wc"] Oct 01 16:35:31 crc kubenswrapper[4949]: I1001 16:35:31.142151 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"a9aee921-7753-4725-a42d-ee8161afd631","Type":"ContainerStarted","Data":"d8b73d802979e086a7ff7f8cd3fc373af5613e92c46c32a18d67def987bdc182"} Oct 01 16:35:31 crc kubenswrapper[4949]: I1001 16:35:31.149339 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"bdb84d6a-de04-4107-917f-c2a6599ed2dc","Type":"ContainerStarted","Data":"db42cda8c91a51345d2bbab1008d79edb02d25abc62bd87e3d808197c308ff7c"} Oct 01 16:35:31 crc kubenswrapper[4949]: W1001 16:35:31.155529 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda2eee1a1_972c_4bf6_9dab_d55162980ed9.slice/crio-9e8c5d72acd6c4d43d59b2e3aa1b6db9c567c7b2f69a5d7769a53acc373825ef WatchSource:0}: Error finding container 9e8c5d72acd6c4d43d59b2e3aa1b6db9c567c7b2f69a5d7769a53acc373825ef: Status 404 returned error can't find the container with id 9e8c5d72acd6c4d43d59b2e3aa1b6db9c567c7b2f69a5d7769a53acc373825ef Oct 01 16:35:31 crc kubenswrapper[4949]: W1001 16:35:31.157239 4949 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4fbb69f3_8bbc_4acd_97eb_b67dcd314f2d.slice/crio-9f73ad09e7514b6833a0fd571bf7e85c9da213a76987d31fe6793e6f6593d1b3 WatchSource:0}: Error finding container 9f73ad09e7514b6833a0fd571bf7e85c9da213a76987d31fe6793e6f6593d1b3: Status 404 returned error can't find the container with id 9f73ad09e7514b6833a0fd571bf7e85c9da213a76987d31fe6793e6f6593d1b3 Oct 01 16:35:31 crc kubenswrapper[4949]: I1001 16:35:31.214117 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-788cb9bcc-8rj25"] Oct 01 16:35:31 crc kubenswrapper[4949]: I1001 16:35:31.617752 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 01 16:35:31 crc kubenswrapper[4949]: W1001 16:35:31.755454 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf2b3a7c7_2c23_4e95_a04f_4b8156bcedef.slice/crio-15e5861fc500f02fb10683a6f9ccd8da14ce0643e679019c2fac937168c10811 WatchSource:0}: Error finding container 15e5861fc500f02fb10683a6f9ccd8da14ce0643e679019c2fac937168c10811: Status 404 returned error can't find the container with id 15e5861fc500f02fb10683a6f9ccd8da14ce0643e679019c2fac937168c10811 Oct 01 16:35:31 crc kubenswrapper[4949]: I1001 16:35:31.799004 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 01 16:35:32 crc kubenswrapper[4949]: I1001 16:35:32.163460 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef","Type":"ContainerStarted","Data":"15e5861fc500f02fb10683a6f9ccd8da14ce0643e679019c2fac937168c10811"} Oct 01 16:35:32 crc kubenswrapper[4949]: I1001 16:35:32.165963 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7756fd4cb7-fb6wc" 
event={"ID":"a2eee1a1-972c-4bf6-9dab-d55162980ed9","Type":"ContainerStarted","Data":"9e8c5d72acd6c4d43d59b2e3aa1b6db9c567c7b2f69a5d7769a53acc373825ef"} Oct 01 16:35:32 crc kubenswrapper[4949]: I1001 16:35:32.169257 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-788cb9bcc-8rj25" event={"ID":"51ce1d58-e3c1-4eff-819b-b4f6f19e3498","Type":"ContainerStarted","Data":"6d19bbc71b970035c754ff116fd49a27a4ccbbed800f90d3279e8d134fc0b1ec"} Oct 01 16:35:32 crc kubenswrapper[4949]: I1001 16:35:32.172341 4949 generic.go:334] "Generic (PLEG): container finished" podID="4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d" containerID="4ee2ecb36bb3f42c8e16bdfacc147b29d74a14d5d411bb8773a939e730be58a7" exitCode=0 Oct 01 16:35:32 crc kubenswrapper[4949]: I1001 16:35:32.172398 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-srpdf" event={"ID":"4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d","Type":"ContainerDied","Data":"4ee2ecb36bb3f42c8e16bdfacc147b29d74a14d5d411bb8773a939e730be58a7"} Oct 01 16:35:32 crc kubenswrapper[4949]: I1001 16:35:32.172423 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-srpdf" event={"ID":"4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d","Type":"ContainerStarted","Data":"9f73ad09e7514b6833a0fd571bf7e85c9da213a76987d31fe6793e6f6593d1b3"} Oct 01 16:35:32 crc kubenswrapper[4949]: I1001 16:35:32.181694 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa66b61c-a20e-48af-9432-8a980d60b47c","Type":"ContainerStarted","Data":"e9d93511dddd9198c329d746bf65a8599027be770dbff5811ba579b28bb68ca0"} Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.031934 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7756fd4cb7-fb6wc"] Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.065535 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-7f78d9658d-xl5nq"] Oct 01 16:35:33 crc 
kubenswrapper[4949]: I1001 16:35:33.067552 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.071549 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-horizon-svc" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.075691 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f78d9658d-xl5nq"] Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.094588 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.117342 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-788cb9bcc-8rj25"] Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.157424 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/horizon-797b4b5c88-m9tdj"] Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.159006 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.168742 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.180444 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-797b4b5c88-m9tdj"] Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.208913 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"bdb84d6a-de04-4107-917f-c2a6599ed2dc","Type":"ContainerStarted","Data":"4879b13e6ed417367cea725692adb4ae77d9c7802172e0c41b21e68f0d3eccea"} Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.208953 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-backup-0" event={"ID":"bdb84d6a-de04-4107-917f-c2a6599ed2dc","Type":"ContainerStarted","Data":"17b5f36267678ab3bfdc1cb42498a5857d52a8b388a1d0878064711dded6276a"} Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.212117 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"a9aee921-7753-4725-a42d-ee8161afd631","Type":"ContainerStarted","Data":"34ef300bd40d76f5ef0c0552904f3d7382c4cc4c2038f22679b5d04fad704166"} Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.212209 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/cinder-volume-volume1-0" event={"ID":"a9aee921-7753-4725-a42d-ee8161afd631","Type":"ContainerStarted","Data":"34b6169b8857f82d4d28e8b7dd6b335beda74677ba94db7a31feff4ed6ef08af"} Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.213345 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-tls-certs\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " 
pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.213428 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-config-data\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.213455 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49dfcdb-910b-48e0-91bb-a426980dc277-logs\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.213506 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-combined-ca-bundle\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.213545 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcn9w\" (UniqueName: \"kubernetes.io/projected/b49dfcdb-910b-48e0-91bb-a426980dc277-kube-api-access-wcn9w\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.213628 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-scripts\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " 
pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.213659 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-secret-key\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.215193 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa66b61c-a20e-48af-9432-8a980d60b47c","Type":"ContainerStarted","Data":"ffec770b09716c408b86ce284671dfa4ae5a6300e6da80906218ad22cfe81c00"} Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.216858 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef","Type":"ContainerStarted","Data":"6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c"} Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.245341 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-backup-0" podStartSLOduration=3.038702349 podStartE2EDuration="4.245295061s" podCreationTimestamp="2025-10-01 16:35:29 +0000 UTC" firstStartedPulling="2025-10-01 16:35:30.604571778 +0000 UTC m=+3229.910177979" lastFinishedPulling="2025-10-01 16:35:31.8111645 +0000 UTC m=+3231.116770691" observedRunningTime="2025-10-01 16:35:33.230673287 +0000 UTC m=+3232.536279478" watchObservedRunningTime="2025-10-01 16:35:33.245295061 +0000 UTC m=+3232.550901252" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.275173 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/cinder-volume-volume1-0" podStartSLOduration=3.269676794 podStartE2EDuration="4.275155305s" podCreationTimestamp="2025-10-01 16:35:29 +0000 UTC" 
firstStartedPulling="2025-10-01 16:35:30.806842461 +0000 UTC m=+3230.112448642" lastFinishedPulling="2025-10-01 16:35:31.812320972 +0000 UTC m=+3231.117927153" observedRunningTime="2025-10-01 16:35:33.250977258 +0000 UTC m=+3232.556583459" watchObservedRunningTime="2025-10-01 16:35:33.275155305 +0000 UTC m=+3232.580761496" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315618 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-tls-certs\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315671 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-config-data\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315686 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49dfcdb-910b-48e0-91bb-a426980dc277-logs\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315713 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d84j6\" (UniqueName: \"kubernetes.io/projected/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-kube-api-access-d84j6\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315739 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-config-data\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315775 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-combined-ca-bundle\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315800 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcn9w\" (UniqueName: \"kubernetes.io/projected/b49dfcdb-910b-48e0-91bb-a426980dc277-kube-api-access-wcn9w\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315871 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-horizon-tls-certs\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315916 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-scripts\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315931 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-secret-key\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315950 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-combined-ca-bundle\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.315982 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-logs\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.316003 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-horizon-secret-key\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.316026 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-scripts\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.317921 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-scripts\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.319806 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-config-data\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.320143 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49dfcdb-910b-48e0-91bb-a426980dc277-logs\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.323622 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-combined-ca-bundle\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.323831 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-tls-certs\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.334639 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-secret-key\") pod \"horizon-7f78d9658d-xl5nq\" (UID: 
\"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.342545 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcn9w\" (UniqueName: \"kubernetes.io/projected/b49dfcdb-910b-48e0-91bb-a426980dc277-kube-api-access-wcn9w\") pod \"horizon-7f78d9658d-xl5nq\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.403050 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.421337 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d84j6\" (UniqueName: \"kubernetes.io/projected/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-kube-api-access-d84j6\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.421395 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-config-data\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.421457 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-horizon-tls-certs\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.421491 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-combined-ca-bundle\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.421518 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-logs\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.421540 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-horizon-secret-key\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.421569 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-scripts\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.422604 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-scripts\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.425020 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-config-data\") pod \"horizon-797b4b5c88-m9tdj\" (UID: 
\"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.425422 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-logs\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.428714 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-horizon-secret-key\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.430181 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-combined-ca-bundle\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.433622 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-horizon-tls-certs\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.440924 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d84j6\" (UniqueName: \"kubernetes.io/projected/3ff4ae7d-bc42-404d-ab53-e189c6d9a00a-kube-api-access-d84j6\") pod \"horizon-797b4b5c88-m9tdj\" (UID: \"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a\") " pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: 
I1001 16:35:33.491533 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.596670 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-db-create-srpdf" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.726847 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5qdh6\" (UniqueName: \"kubernetes.io/projected/4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d-kube-api-access-5qdh6\") pod \"4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d\" (UID: \"4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d\") " Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.733509 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d-kube-api-access-5qdh6" (OuterVolumeSpecName: "kube-api-access-5qdh6") pod "4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d" (UID: "4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d"). InnerVolumeSpecName "kube-api-access-5qdh6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:35:33 crc kubenswrapper[4949]: I1001 16:35:33.831871 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5qdh6\" (UniqueName: \"kubernetes.io/projected/4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d-kube-api-access-5qdh6\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.137245 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-7f78d9658d-xl5nq"] Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.150359 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/horizon-797b4b5c88-m9tdj"] Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.249982 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-create-srpdf" event={"ID":"4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d","Type":"ContainerDied","Data":"9f73ad09e7514b6833a0fd571bf7e85c9da213a76987d31fe6793e6f6593d1b3"} Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.250028 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9f73ad09e7514b6833a0fd571bf7e85c9da213a76987d31fe6793e6f6593d1b3" Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.250083 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-create-srpdf" Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.279899 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f78d9658d-xl5nq" event={"ID":"b49dfcdb-910b-48e0-91bb-a426980dc277","Type":"ContainerStarted","Data":"301323ff76bbfd53336b665902f54c0ceb204f65c6902d0b61dde6f4dfcbff27"} Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.294694 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-797b4b5c88-m9tdj" event={"ID":"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a","Type":"ContainerStarted","Data":"f7d757d5ad14fbb9d6759851732587c2580311ff29dceaaa29b350e8cfb22792"} Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.302850 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa66b61c-a20e-48af-9432-8a980d60b47c","Type":"ContainerStarted","Data":"6c210d6d50f17a5fb5b20cbdaa3c99913eb91c33dfdde78193fee9f589d137c0"} Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.303012 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerName="glance-log" containerID="cri-o://ffec770b09716c408b86ce284671dfa4ae5a6300e6da80906218ad22cfe81c00" gracePeriod=30 Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.303305 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-external-api-0" podUID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerName="glance-httpd" containerID="cri-o://6c210d6d50f17a5fb5b20cbdaa3c99913eb91c33dfdde78193fee9f589d137c0" gracePeriod=30 Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.312534 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" 
event={"ID":"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef","Type":"ContainerStarted","Data":"f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91"} Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.313321 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" containerName="glance-log" containerID="cri-o://6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c" gracePeriod=30 Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.313500 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/glance-default-internal-api-0" podUID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" containerName="glance-httpd" containerID="cri-o://f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91" gracePeriod=30 Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.361310 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=4.361272301 podStartE2EDuration="4.361272301s" podCreationTimestamp="2025-10-01 16:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:35:34.332681122 +0000 UTC m=+3233.638287313" watchObservedRunningTime="2025-10-01 16:35:34.361272301 +0000 UTC m=+3233.666878492" Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.369395 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=4.369378745 podStartE2EDuration="4.369378745s" podCreationTimestamp="2025-10-01 16:35:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:35:34.357953949 +0000 UTC m=+3233.663560130" watchObservedRunningTime="2025-10-01 16:35:34.369378745 +0000 UTC 
m=+3233.674984936" Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.961785 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-backup-0" Oct 01 16:35:34 crc kubenswrapper[4949]: I1001 16:35:34.989586 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.008686 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.171045 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p95d6\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-kube-api-access-p95d6\") pod \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.171343 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.171371 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-ceph\") pod \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.171414 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-internal-tls-certs\") pod \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 
16:35:35.171457 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-combined-ca-bundle\") pod \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.171480 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-scripts\") pod \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.171507 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-config-data\") pod \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.171555 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-logs\") pod \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.171606 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-httpd-run\") pod \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\" (UID: \"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.174834 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" (UID: 
"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.175443 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-logs" (OuterVolumeSpecName: "logs") pod "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" (UID: "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.196912 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-kube-api-access-p95d6" (OuterVolumeSpecName: "kube-api-access-p95d6") pod "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" (UID: "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef"). InnerVolumeSpecName "kube-api-access-p95d6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.198314 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" (UID: "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.204509 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-ceph" (OuterVolumeSpecName: "ceph") pod "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" (UID: "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.211254 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-scripts" (OuterVolumeSpecName: "scripts") pod "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" (UID: "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.214702 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" (UID: "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.239990 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-config-data" (OuterVolumeSpecName: "config-data") pod "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" (UID: "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.249788 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" (UID: "f2b3a7c7-2c23-4e95-a04f-4b8156bcedef"). InnerVolumeSpecName "internal-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.275984 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-logs\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.276012 4949 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.276024 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p95d6\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-kube-api-access-p95d6\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.276057 4949 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.276068 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.276077 4949 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.276085 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.276093 4949 reconciler_common.go:293] "Volume detached 
for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.276100 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.297437 4949 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.334401 4949 generic.go:334] "Generic (PLEG): container finished" podID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerID="6c210d6d50f17a5fb5b20cbdaa3c99913eb91c33dfdde78193fee9f589d137c0" exitCode=0 Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.334426 4949 generic.go:334] "Generic (PLEG): container finished" podID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerID="ffec770b09716c408b86ce284671dfa4ae5a6300e6da80906218ad22cfe81c00" exitCode=143 Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.334461 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa66b61c-a20e-48af-9432-8a980d60b47c","Type":"ContainerDied","Data":"6c210d6d50f17a5fb5b20cbdaa3c99913eb91c33dfdde78193fee9f589d137c0"} Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.334484 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"fa66b61c-a20e-48af-9432-8a980d60b47c","Type":"ContainerDied","Data":"ffec770b09716c408b86ce284671dfa4ae5a6300e6da80906218ad22cfe81c00"} Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.337597 4949 generic.go:334] "Generic (PLEG): container finished" podID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" 
containerID="f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91" exitCode=0 Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.337611 4949 generic.go:334] "Generic (PLEG): container finished" podID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" containerID="6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c" exitCode=143 Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.337651 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.337634 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef","Type":"ContainerDied","Data":"f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91"} Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.337754 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef","Type":"ContainerDied","Data":"6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c"} Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.337767 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"f2b3a7c7-2c23-4e95-a04f-4b8156bcedef","Type":"ContainerDied","Data":"15e5861fc500f02fb10683a6f9ccd8da14ce0643e679019c2fac937168c10811"} Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.337782 4949 scope.go:117] "RemoveContainer" containerID="f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.377782 4949 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.378513 4949 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.391372 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.403272 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 01 16:35:35 crc kubenswrapper[4949]: E1001 16:35:35.403663 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" containerName="glance-log" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.403675 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" containerName="glance-log" Oct 01 16:35:35 crc kubenswrapper[4949]: E1001 16:35:35.403696 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d" containerName="mariadb-database-create" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.403702 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d" containerName="mariadb-database-create" Oct 01 16:35:35 crc kubenswrapper[4949]: E1001 16:35:35.403719 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" containerName="glance-httpd" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.403727 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" containerName="glance-httpd" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.403928 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d" containerName="mariadb-database-create" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.403940 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" containerName="glance-httpd" 
Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.403958 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" containerName="glance-log" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.404906 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.407387 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-internal-config-data" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.407596 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-internal-svc" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.407791 4949 scope.go:117] "RemoveContainer" containerID="6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.414498 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.440705 4949 scope.go:117] "RemoveContainer" containerID="f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91" Oct 01 16:35:35 crc kubenswrapper[4949]: E1001 16:35:35.443732 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91\": container with ID starting with f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91 not found: ID does not exist" containerID="f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.443862 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91"} err="failed to get container status 
\"f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91\": rpc error: code = NotFound desc = could not find container \"f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91\": container with ID starting with f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91 not found: ID does not exist" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.443953 4949 scope.go:117] "RemoveContainer" containerID="6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c" Oct 01 16:35:35 crc kubenswrapper[4949]: E1001 16:35:35.444580 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c\": container with ID starting with 6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c not found: ID does not exist" containerID="6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.444609 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c"} err="failed to get container status \"6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c\": rpc error: code = NotFound desc = could not find container \"6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c\": container with ID starting with 6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c not found: ID does not exist" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.444622 4949 scope.go:117] "RemoveContainer" containerID="f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.444995 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91"} err="failed to get 
container status \"f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91\": rpc error: code = NotFound desc = could not find container \"f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91\": container with ID starting with f6683be211c81c56a9fa29910455b99410c8adb17e2d877b006a25535442ab91 not found: ID does not exist" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.445024 4949 scope.go:117] "RemoveContainer" containerID="6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.445381 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c"} err="failed to get container status \"6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c\": rpc error: code = NotFound desc = could not find container \"6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c\": container with ID starting with 6b7167e6d6353ef5fa2d5fd624b973a44fbb14eafdd511b2b558a60fa2faf78c not found: ID does not exist" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.479144 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7z47n\" (UniqueName: \"kubernetes.io/projected/29e77b45-e692-415f-8966-6e74d30b4d7b-kube-api-access-7z47n\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.479220 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.479247 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.479263 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e77b45-e692-415f-8966-6e74d30b4d7b-logs\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.479315 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.479357 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.479419 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 
16:35:35.479475 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29e77b45-e692-415f-8966-6e74d30b4d7b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.479512 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/29e77b45-e692-415f-8966-6e74d30b4d7b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.580751 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7z47n\" (UniqueName: \"kubernetes.io/projected/29e77b45-e692-415f-8966-6e74d30b4d7b-kube-api-access-7z47n\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.580852 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.580886 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.580906 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e77b45-e692-415f-8966-6e74d30b4d7b-logs\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.581066 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") device mount path \"/mnt/openstack/pv01\"" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.581375 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/29e77b45-e692-415f-8966-6e74d30b4d7b-logs\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.585060 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.586145 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-scripts\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.587536 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: 
\"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.587760 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.587901 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29e77b45-e692-415f-8966-6e74d30b4d7b-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.587972 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/29e77b45-e692-415f-8966-6e74d30b4d7b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.588979 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-combined-ca-bundle\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.589303 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/29e77b45-e692-415f-8966-6e74d30b4d7b-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.593655 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/29e77b45-e692-415f-8966-6e74d30b4d7b-ceph\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.594164 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-config-data\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.597709 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/29e77b45-e692-415f-8966-6e74d30b4d7b-internal-tls-certs\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.598080 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7z47n\" (UniqueName: \"kubernetes.io/projected/29e77b45-e692-415f-8966-6e74d30b4d7b-kube-api-access-7z47n\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.609288 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.615598 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2b3a7c7-2c23-4e95-a04f-4b8156bcedef" path="/var/lib/kubelet/pods/f2b3a7c7-2c23-4e95-a04f-4b8156bcedef/volumes" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.637409 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-internal-api-0\" (UID: \"29e77b45-e692-415f-8966-6e74d30b4d7b\") " pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.690426 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-ceph\") pod \"fa66b61c-a20e-48af-9432-8a980d60b47c\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.690487 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-scripts\") pod \"fa66b61c-a20e-48af-9432-8a980d60b47c\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.690631 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-httpd-run\") pod \"fa66b61c-a20e-48af-9432-8a980d60b47c\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.690682 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"fa66b61c-a20e-48af-9432-8a980d60b47c\" (UID: 
\"fa66b61c-a20e-48af-9432-8a980d60b47c\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.690706 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-logs\") pod \"fa66b61c-a20e-48af-9432-8a980d60b47c\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.690798 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-combined-ca-bundle\") pod \"fa66b61c-a20e-48af-9432-8a980d60b47c\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.690826 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-public-tls-certs\") pod \"fa66b61c-a20e-48af-9432-8a980d60b47c\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.690841 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-config-data\") pod \"fa66b61c-a20e-48af-9432-8a980d60b47c\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.690865 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vz8lr\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-kube-api-access-vz8lr\") pod \"fa66b61c-a20e-48af-9432-8a980d60b47c\" (UID: \"fa66b61c-a20e-48af-9432-8a980d60b47c\") " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.691537 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-logs" (OuterVolumeSpecName: "logs") pod "fa66b61c-a20e-48af-9432-8a980d60b47c" (UID: "fa66b61c-a20e-48af-9432-8a980d60b47c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.691668 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-logs\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.692668 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fa66b61c-a20e-48af-9432-8a980d60b47c" (UID: "fa66b61c-a20e-48af-9432-8a980d60b47c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.698344 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-kube-api-access-vz8lr" (OuterVolumeSpecName: "kube-api-access-vz8lr") pod "fa66b61c-a20e-48af-9432-8a980d60b47c" (UID: "fa66b61c-a20e-48af-9432-8a980d60b47c"). InnerVolumeSpecName "kube-api-access-vz8lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.698486 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-ceph" (OuterVolumeSpecName: "ceph") pod "fa66b61c-a20e-48af-9432-8a980d60b47c" (UID: "fa66b61c-a20e-48af-9432-8a980d60b47c"). InnerVolumeSpecName "ceph". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.700271 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "glance") pod "fa66b61c-a20e-48af-9432-8a980d60b47c" (UID: "fa66b61c-a20e-48af-9432-8a980d60b47c"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.703086 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-scripts" (OuterVolumeSpecName: "scripts") pod "fa66b61c-a20e-48af-9432-8a980d60b47c" (UID: "fa66b61c-a20e-48af-9432-8a980d60b47c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.725756 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "fa66b61c-a20e-48af-9432-8a980d60b47c" (UID: "fa66b61c-a20e-48af-9432-8a980d60b47c"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.746256 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.750649 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-config-data" (OuterVolumeSpecName: "config-data") pod "fa66b61c-a20e-48af-9432-8a980d60b47c" (UID: "fa66b61c-a20e-48af-9432-8a980d60b47c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.758148 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "fa66b61c-a20e-48af-9432-8a980d60b47c" (UID: "fa66b61c-a20e-48af-9432-8a980d60b47c"). InnerVolumeSpecName "public-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.798542 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.798569 4949 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-public-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.798584 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.798622 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vz8lr\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-kube-api-access-vz8lr\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.798633 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/fa66b61c-a20e-48af-9432-8a980d60b47c-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.798643 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/fa66b61c-a20e-48af-9432-8a980d60b47c-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.798650 4949 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fa66b61c-a20e-48af-9432-8a980d60b47c-httpd-run\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.798679 4949 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.818634 4949 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Oct 01 16:35:35 crc kubenswrapper[4949]: I1001 16:35:35.903364 4949 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.379051 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-internal-api-0"] Oct 01 16:35:36 crc kubenswrapper[4949]: W1001 16:35:36.380633 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod29e77b45_e692_415f_8966_6e74d30b4d7b.slice/crio-9f18e5b6b5b1e752475aa556bc1da1dcd0a4ff5a081f1edd057fd4dcbebad818 WatchSource:0}: Error finding container 9f18e5b6b5b1e752475aa556bc1da1dcd0a4ff5a081f1edd057fd4dcbebad818: Status 404 returned error can't find the container with id 9f18e5b6b5b1e752475aa556bc1da1dcd0a4ff5a081f1edd057fd4dcbebad818 Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.385192 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" 
event={"ID":"fa66b61c-a20e-48af-9432-8a980d60b47c","Type":"ContainerDied","Data":"e9d93511dddd9198c329d746bf65a8599027be770dbff5811ba579b28bb68ca0"} Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.385243 4949 scope.go:117] "RemoveContainer" containerID="6c210d6d50f17a5fb5b20cbdaa3c99913eb91c33dfdde78193fee9f589d137c0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.385353 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.475910 4949 scope.go:117] "RemoveContainer" containerID="ffec770b09716c408b86ce284671dfa4ae5a6300e6da80906218ad22cfe81c00" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.481477 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.498367 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.506302 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/glance-default-external-api-0"] Oct 01 16:35:36 crc kubenswrapper[4949]: E1001 16:35:36.506734 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerName="glance-log" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.506747 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerName="glance-log" Oct 01 16:35:36 crc kubenswrapper[4949]: E1001 16:35:36.506763 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerName="glance-httpd" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.506769 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerName="glance-httpd" Oct 01 16:35:36 crc 
kubenswrapper[4949]: I1001 16:35:36.506941 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerName="glance-log" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.506957 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="fa66b61c-a20e-48af-9432-8a980d60b47c" containerName="glance-httpd" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.507888 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.512195 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-glance-default-public-svc" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.512410 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"glance-default-external-config-data" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.548805 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.623951 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.623995 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-scripts\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.624020 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.624069 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-config-data\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.624093 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.624116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rprff\" (UniqueName: \"kubernetes.io/projected/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-kube-api-access-rprff\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.624181 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-ceph\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.624196 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.624219 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-logs\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.726358 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-ceph\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.726408 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.726437 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-logs\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.726530 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.726549 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-scripts\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.726568 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.726641 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-config-data\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.726665 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.726690 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rprff\" (UniqueName: 
\"kubernetes.io/projected/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-kube-api-access-rprff\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.730477 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-logs\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.730988 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.731285 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") device mount path \"/mnt/openstack/pv10\"" pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.734692 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-ceph\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.736196 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-config-data\") pod 
\"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.741705 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-public-tls-certs\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.742487 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-combined-ca-bundle\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.744723 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-scripts\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.745396 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rprff\" (UniqueName: \"kubernetes.io/projected/b2d87d62-452d-44c4-8cfd-5cfaa8bc1157-kube-api-access-rprff\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.768148 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"glance-default-external-api-0\" (UID: \"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157\") " 
pod="openstack/glance-default-external-api-0" Oct 01 16:35:36 crc kubenswrapper[4949]: I1001 16:35:36.891504 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/glance-default-external-api-0" Oct 01 16:35:37 crc kubenswrapper[4949]: I1001 16:35:37.396359 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"29e77b45-e692-415f-8966-6e74d30b4d7b","Type":"ContainerStarted","Data":"ea556683c4a56a2ba0a22a98422be43255e08ed284c338809edadbfcdeb2c359"} Oct 01 16:35:37 crc kubenswrapper[4949]: I1001 16:35:37.397063 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"29e77b45-e692-415f-8966-6e74d30b4d7b","Type":"ContainerStarted","Data":"9f18e5b6b5b1e752475aa556bc1da1dcd0a4ff5a081f1edd057fd4dcbebad818"} Oct 01 16:35:37 crc kubenswrapper[4949]: I1001 16:35:37.612828 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fa66b61c-a20e-48af-9432-8a980d60b47c" path="/var/lib/kubelet/pods/fa66b61c-a20e-48af-9432-8a980d60b47c/volumes" Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.181285 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-backup-0" Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.236013 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/cinder-volume-volume1-0" Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.432942 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-97c5-account-create-f68gb"] Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.438796 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-97c5-account-create-f68gb" Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.442904 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.449281 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-97c5-account-create-f68gb"] Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.514838 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b64dr\" (UniqueName: \"kubernetes.io/projected/837eef8c-5b04-4afe-90a1-05f19f168bba-kube-api-access-b64dr\") pod \"manila-97c5-account-create-f68gb\" (UID: \"837eef8c-5b04-4afe-90a1-05f19f168bba\") " pod="openstack/manila-97c5-account-create-f68gb" Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.616856 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b64dr\" (UniqueName: \"kubernetes.io/projected/837eef8c-5b04-4afe-90a1-05f19f168bba-kube-api-access-b64dr\") pod \"manila-97c5-account-create-f68gb\" (UID: \"837eef8c-5b04-4afe-90a1-05f19f168bba\") " pod="openstack/manila-97c5-account-create-f68gb" Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.657930 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b64dr\" (UniqueName: \"kubernetes.io/projected/837eef8c-5b04-4afe-90a1-05f19f168bba-kube-api-access-b64dr\") pod \"manila-97c5-account-create-f68gb\" (UID: \"837eef8c-5b04-4afe-90a1-05f19f168bba\") " pod="openstack/manila-97c5-account-create-f68gb" Oct 01 16:35:40 crc kubenswrapper[4949]: I1001 16:35:40.764693 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-97c5-account-create-f68gb" Oct 01 16:35:42 crc kubenswrapper[4949]: I1001 16:35:42.643279 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/glance-default-external-api-0"] Oct 01 16:35:42 crc kubenswrapper[4949]: I1001 16:35:42.655847 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-97c5-account-create-f68gb"] Oct 01 16:35:42 crc kubenswrapper[4949]: W1001 16:35:42.660834 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb2d87d62_452d_44c4_8cfd_5cfaa8bc1157.slice/crio-3bc1b7f77f947c0ebf38770861abd85edd512edaf91d8f114c5375f04e843004 WatchSource:0}: Error finding container 3bc1b7f77f947c0ebf38770861abd85edd512edaf91d8f114c5375f04e843004: Status 404 returned error can't find the container with id 3bc1b7f77f947c0ebf38770861abd85edd512edaf91d8f114c5375f04e843004 Oct 01 16:35:42 crc kubenswrapper[4949]: I1001 16:35:42.673421 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-db-secret" Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.476935 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-788cb9bcc-8rj25" event={"ID":"51ce1d58-e3c1-4eff-819b-b4f6f19e3498","Type":"ContainerStarted","Data":"f333c87ed382609f35357c9bab12b2a2bbedb3aa93010da4be8fe4f75495a1a2"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.477555 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-788cb9bcc-8rj25" event={"ID":"51ce1d58-e3c1-4eff-819b-b4f6f19e3498","Type":"ContainerStarted","Data":"d35151c9be56ead7878ee7e525c027e34484acf7aa004d71616be7a8330e4f27"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.477232 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-788cb9bcc-8rj25" podUID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerName="horizon-log" 
containerID="cri-o://d35151c9be56ead7878ee7e525c027e34484acf7aa004d71616be7a8330e4f27" gracePeriod=30 Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.477653 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-788cb9bcc-8rj25" podUID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerName="horizon" containerID="cri-o://f333c87ed382609f35357c9bab12b2a2bbedb3aa93010da4be8fe4f75495a1a2" gracePeriod=30 Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.481394 4949 generic.go:334] "Generic (PLEG): container finished" podID="837eef8c-5b04-4afe-90a1-05f19f168bba" containerID="515e3b028d6e494898ffce45e312069a6c7492536b2a441f16660fa62487d4f0" exitCode=0 Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.481539 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-97c5-account-create-f68gb" event={"ID":"837eef8c-5b04-4afe-90a1-05f19f168bba","Type":"ContainerDied","Data":"515e3b028d6e494898ffce45e312069a6c7492536b2a441f16660fa62487d4f0"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.481569 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-97c5-account-create-f68gb" event={"ID":"837eef8c-5b04-4afe-90a1-05f19f168bba","Type":"ContainerStarted","Data":"e0b2a468a797955abce88db4fc0e87daeb84c5bdcc6fa84671d0018bf98b3bd5"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.484861 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f78d9658d-xl5nq" event={"ID":"b49dfcdb-910b-48e0-91bb-a426980dc277","Type":"ContainerStarted","Data":"7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.484907 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f78d9658d-xl5nq" event={"ID":"b49dfcdb-910b-48e0-91bb-a426980dc277","Type":"ContainerStarted","Data":"2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46"} Oct 01 16:35:43 crc 
kubenswrapper[4949]: I1001 16:35:43.487873 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157","Type":"ContainerStarted","Data":"c4eb231313b57be369c6d11d29fccad1a900115a23da38069189e433b4159074"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.487939 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157","Type":"ContainerStarted","Data":"3bc1b7f77f947c0ebf38770861abd85edd512edaf91d8f114c5375f04e843004"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.490388 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-internal-api-0" event={"ID":"29e77b45-e692-415f-8966-6e74d30b4d7b","Type":"ContainerStarted","Data":"7764b03f70abc681adb3666728001b35bbb306c5b856f2f2089e9cc374450018"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.505318 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-797b4b5c88-m9tdj" event={"ID":"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a","Type":"ContainerStarted","Data":"5b5ceb6fa691302c72665c7715438e4de9ac03f263a01f33223793864ef2ba5d"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.505373 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-797b4b5c88-m9tdj" event={"ID":"3ff4ae7d-bc42-404d-ab53-e189c6d9a00a","Type":"ContainerStarted","Data":"4918efa17930a76508bfe509d37719b32345e60dc26e15c9ba54a0d919647942"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.510481 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7756fd4cb7-fb6wc" event={"ID":"a2eee1a1-972c-4bf6-9dab-d55162980ed9","Type":"ContainerStarted","Data":"4c05e1c34bacf35ea67c373df5f4fe8e4317af2fd01832ea8d64aea9c5c0ebde"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.510530 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack/horizon-7756fd4cb7-fb6wc" event={"ID":"a2eee1a1-972c-4bf6-9dab-d55162980ed9","Type":"ContainerStarted","Data":"e527636871b09b9ce51622266540b5954657713feac9dccb570e0f16c92d7ffa"} Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.510618 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7756fd4cb7-fb6wc" podUID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" containerName="horizon-log" containerID="cri-o://e527636871b09b9ce51622266540b5954657713feac9dccb570e0f16c92d7ffa" gracePeriod=30 Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.510657 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7756fd4cb7-fb6wc" podUID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" containerName="horizon" containerID="cri-o://4c05e1c34bacf35ea67c373df5f4fe8e4317af2fd01832ea8d64aea9c5c0ebde" gracePeriod=30 Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.538775 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-internal-api-0" podStartSLOduration=8.538753324 podStartE2EDuration="8.538753324s" podCreationTimestamp="2025-10-01 16:35:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:35:43.532849441 +0000 UTC m=+3242.838455632" watchObservedRunningTime="2025-10-01 16:35:43.538753324 +0000 UTC m=+3242.844359515" Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.538945 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-788cb9bcc-8rj25" podStartSLOduration=2.517242157 podStartE2EDuration="13.53893732s" podCreationTimestamp="2025-10-01 16:35:30 +0000 UTC" firstStartedPulling="2025-10-01 16:35:31.260342707 +0000 UTC m=+3230.565948888" lastFinishedPulling="2025-10-01 16:35:42.28203786 +0000 UTC m=+3241.587644051" observedRunningTime="2025-10-01 16:35:43.504389295 +0000 UTC 
m=+3242.809995496" watchObservedRunningTime="2025-10-01 16:35:43.53893732 +0000 UTC m=+3242.844543511" Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.574472 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7f78d9658d-xl5nq" podStartSLOduration=2.445233718 podStartE2EDuration="10.574446469s" podCreationTimestamp="2025-10-01 16:35:33 +0000 UTC" firstStartedPulling="2025-10-01 16:35:34.189520891 +0000 UTC m=+3233.495127082" lastFinishedPulling="2025-10-01 16:35:42.318733642 +0000 UTC m=+3241.624339833" observedRunningTime="2025-10-01 16:35:43.567264371 +0000 UTC m=+3242.872870562" watchObservedRunningTime="2025-10-01 16:35:43.574446469 +0000 UTC m=+3242.880052680" Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.606315 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-7756fd4cb7-fb6wc" podStartSLOduration=2.521816843 podStartE2EDuration="13.606294919s" podCreationTimestamp="2025-10-01 16:35:30 +0000 UTC" firstStartedPulling="2025-10-01 16:35:31.159085672 +0000 UTC m=+3230.464691863" lastFinishedPulling="2025-10-01 16:35:42.243563748 +0000 UTC m=+3241.549169939" observedRunningTime="2025-10-01 16:35:43.594148923 +0000 UTC m=+3242.899755114" watchObservedRunningTime="2025-10-01 16:35:43.606294919 +0000 UTC m=+3242.911901130" Oct 01 16:35:43 crc kubenswrapper[4949]: I1001 16:35:43.616922 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/horizon-797b4b5c88-m9tdj" podStartSLOduration=2.562792053 podStartE2EDuration="10.61686708s" podCreationTimestamp="2025-10-01 16:35:33 +0000 UTC" firstStartedPulling="2025-10-01 16:35:34.189519271 +0000 UTC m=+3233.495125472" lastFinishedPulling="2025-10-01 16:35:42.243594288 +0000 UTC m=+3241.549200499" observedRunningTime="2025-10-01 16:35:43.614504574 +0000 UTC m=+3242.920110775" watchObservedRunningTime="2025-10-01 16:35:43.61686708 +0000 UTC m=+3242.922473271" Oct 01 16:35:44 crc 
kubenswrapper[4949]: I1001 16:35:44.523802 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/glance-default-external-api-0" event={"ID":"b2d87d62-452d-44c4-8cfd-5cfaa8bc1157","Type":"ContainerStarted","Data":"5314fd135235efffe18a0124ad1a129861ac8669c8c2edff8469777ec40fb146"} Oct 01 16:35:44 crc kubenswrapper[4949]: I1001 16:35:44.551220 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/glance-default-external-api-0" podStartSLOduration=8.551115785 podStartE2EDuration="8.551115785s" podCreationTimestamp="2025-10-01 16:35:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:35:44.54477935 +0000 UTC m=+3243.850385561" watchObservedRunningTime="2025-10-01 16:35:44.551115785 +0000 UTC m=+3243.856721976" Oct 01 16:35:44 crc kubenswrapper[4949]: I1001 16:35:44.933090 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-97c5-account-create-f68gb" Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.118294 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b64dr\" (UniqueName: \"kubernetes.io/projected/837eef8c-5b04-4afe-90a1-05f19f168bba-kube-api-access-b64dr\") pod \"837eef8c-5b04-4afe-90a1-05f19f168bba\" (UID: \"837eef8c-5b04-4afe-90a1-05f19f168bba\") " Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.123951 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837eef8c-5b04-4afe-90a1-05f19f168bba-kube-api-access-b64dr" (OuterVolumeSpecName: "kube-api-access-b64dr") pod "837eef8c-5b04-4afe-90a1-05f19f168bba" (UID: "837eef8c-5b04-4afe-90a1-05f19f168bba"). InnerVolumeSpecName "kube-api-access-b64dr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.221608 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b64dr\" (UniqueName: \"kubernetes.io/projected/837eef8c-5b04-4afe-90a1-05f19f168bba-kube-api-access-b64dr\") on node \"crc\" DevicePath \"\"" Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.540491 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-97c5-account-create-f68gb" Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.541350 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-97c5-account-create-f68gb" event={"ID":"837eef8c-5b04-4afe-90a1-05f19f168bba","Type":"ContainerDied","Data":"e0b2a468a797955abce88db4fc0e87daeb84c5bdcc6fa84671d0018bf98b3bd5"} Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.541387 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0b2a468a797955abce88db4fc0e87daeb84c5bdcc6fa84671d0018bf98b3bd5" Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.747690 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.747742 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.801963 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:45 crc kubenswrapper[4949]: I1001 16:35:45.802343 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:46 crc kubenswrapper[4949]: I1001 16:35:46.554605 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:46 crc 
kubenswrapper[4949]: I1001 16:35:46.554665 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:46 crc kubenswrapper[4949]: I1001 16:35:46.892670 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 01 16:35:46 crc kubenswrapper[4949]: I1001 16:35:46.892994 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/glance-default-external-api-0" Oct 01 16:35:46 crc kubenswrapper[4949]: I1001 16:35:46.931115 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 01 16:35:46 crc kubenswrapper[4949]: I1001 16:35:46.941417 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/glance-default-external-api-0" Oct 01 16:35:47 crc kubenswrapper[4949]: I1001 16:35:47.562691 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 01 16:35:47 crc kubenswrapper[4949]: I1001 16:35:47.562724 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/glance-default-external-api-0" Oct 01 16:35:48 crc kubenswrapper[4949]: I1001 16:35:48.837255 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:48 crc kubenswrapper[4949]: I1001 16:35:48.841012 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-internal-api-0" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.554512 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.659991 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:35:50 crc 
kubenswrapper[4949]: I1001 16:35:50.663789 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/glance-default-external-api-0" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.739556 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7756fd4cb7-fb6wc" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.797172 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-db-sync-c2td2"] Oct 01 16:35:50 crc kubenswrapper[4949]: E1001 16:35:50.797772 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837eef8c-5b04-4afe-90a1-05f19f168bba" containerName="mariadb-account-create" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.797846 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="837eef8c-5b04-4afe-90a1-05f19f168bba" containerName="mariadb-account-create" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.798222 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="837eef8c-5b04-4afe-90a1-05f19f168bba" containerName="mariadb-account-create" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.798883 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.800842 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-8d4hg" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.801466 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.816734 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-c2td2"] Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.952218 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-config-data\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.952294 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96xxj\" (UniqueName: \"kubernetes.io/projected/6b775d20-0507-4612-af5a-3400bd30e637-kube-api-access-96xxj\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.952570 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-combined-ca-bundle\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:50 crc kubenswrapper[4949]: I1001 16:35:50.952878 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"job-config-data\" (UniqueName: 
\"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-job-config-data\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.054460 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96xxj\" (UniqueName: \"kubernetes.io/projected/6b775d20-0507-4612-af5a-3400bd30e637-kube-api-access-96xxj\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.054644 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-combined-ca-bundle\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.054891 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-job-config-data\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.054975 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-config-data\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.064408 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-config-data\") pod \"manila-db-sync-c2td2\" (UID: 
\"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.064467 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-combined-ca-bundle\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.071635 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-job-config-data\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.075703 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96xxj\" (UniqueName: \"kubernetes.io/projected/6b775d20-0507-4612-af5a-3400bd30e637-kube-api-access-96xxj\") pod \"manila-db-sync-c2td2\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.121544 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-c2td2" Oct 01 16:35:51 crc kubenswrapper[4949]: I1001 16:35:51.692196 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-db-sync-c2td2"] Oct 01 16:35:51 crc kubenswrapper[4949]: W1001 16:35:51.696445 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6b775d20_0507_4612_af5a_3400bd30e637.slice/crio-5ddc0aa9a320c252bb03b568241d6bb6477493ad8921eb1f58903f7b0918b56b WatchSource:0}: Error finding container 5ddc0aa9a320c252bb03b568241d6bb6477493ad8921eb1f58903f7b0918b56b: Status 404 returned error can't find the container with id 5ddc0aa9a320c252bb03b568241d6bb6477493ad8921eb1f58903f7b0918b56b Oct 01 16:35:52 crc kubenswrapper[4949]: I1001 16:35:52.624209 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-c2td2" event={"ID":"6b775d20-0507-4612-af5a-3400bd30e637","Type":"ContainerStarted","Data":"5ddc0aa9a320c252bb03b568241d6bb6477493ad8921eb1f58903f7b0918b56b"} Oct 01 16:35:53 crc kubenswrapper[4949]: I1001 16:35:53.403456 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:53 crc kubenswrapper[4949]: I1001 16:35:53.403756 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:35:53 crc kubenswrapper[4949]: I1001 16:35:53.405591 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-7f78d9658d-xl5nq" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.254:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.254:8443: connect: connection refused" Oct 01 16:35:53 crc kubenswrapper[4949]: I1001 16:35:53.492703 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:53 
crc kubenswrapper[4949]: I1001 16:35:53.492758 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:35:53 crc kubenswrapper[4949]: I1001 16:35:53.495494 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openstack/horizon-797b4b5c88-m9tdj" podUID="3ff4ae7d-bc42-404d-ab53-e189c6d9a00a" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.255:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.255:8443: connect: connection refused" Oct 01 16:35:56 crc kubenswrapper[4949]: I1001 16:35:56.666489 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-c2td2" event={"ID":"6b775d20-0507-4612-af5a-3400bd30e637","Type":"ContainerStarted","Data":"3242fd9f074592e47af74d4ab46e9b01ec7b0182fbe1e5c565b422005acde5d2"} Oct 01 16:35:56 crc kubenswrapper[4949]: I1001 16:35:56.695830 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-db-sync-c2td2" podStartSLOduration=2.261121506 podStartE2EDuration="6.695812001s" podCreationTimestamp="2025-10-01 16:35:50 +0000 UTC" firstStartedPulling="2025-10-01 16:35:51.698470987 +0000 UTC m=+3251.004077178" lastFinishedPulling="2025-10-01 16:35:56.133161482 +0000 UTC m=+3255.438767673" observedRunningTime="2025-10-01 16:35:56.690306509 +0000 UTC m=+3255.995912700" watchObservedRunningTime="2025-10-01 16:35:56.695812001 +0000 UTC m=+3256.001418182" Oct 01 16:36:05 crc kubenswrapper[4949]: I1001 16:36:05.677775 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:36:05 crc kubenswrapper[4949]: I1001 16:36:05.680184 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:36:06 crc kubenswrapper[4949]: I1001 16:36:06.764694 4949 generic.go:334] "Generic (PLEG): container finished" 
podID="6b775d20-0507-4612-af5a-3400bd30e637" containerID="3242fd9f074592e47af74d4ab46e9b01ec7b0182fbe1e5c565b422005acde5d2" exitCode=0 Oct 01 16:36:06 crc kubenswrapper[4949]: I1001 16:36:06.765022 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-c2td2" event={"ID":"6b775d20-0507-4612-af5a-3400bd30e637","Type":"ContainerDied","Data":"3242fd9f074592e47af74d4ab46e9b01ec7b0182fbe1e5c565b422005acde5d2"} Oct 01 16:36:07 crc kubenswrapper[4949]: I1001 16:36:07.363239 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-797b4b5c88-m9tdj" Oct 01 16:36:07 crc kubenswrapper[4949]: I1001 16:36:07.412678 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:36:07 crc kubenswrapper[4949]: I1001 16:36:07.485306 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f78d9658d-xl5nq"] Oct 01 16:36:07 crc kubenswrapper[4949]: I1001 16:36:07.773300 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f78d9658d-xl5nq" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon" containerID="cri-o://7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1" gracePeriod=30 Oct 01 16:36:07 crc kubenswrapper[4949]: I1001 16:36:07.773300 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/horizon-7f78d9658d-xl5nq" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon-log" containerID="cri-o://2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46" gracePeriod=30 Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.169945 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-c2td2" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.223341 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-job-config-data\") pod \"6b775d20-0507-4612-af5a-3400bd30e637\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.223424 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-config-data\") pod \"6b775d20-0507-4612-af5a-3400bd30e637\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.223531 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-96xxj\" (UniqueName: \"kubernetes.io/projected/6b775d20-0507-4612-af5a-3400bd30e637-kube-api-access-96xxj\") pod \"6b775d20-0507-4612-af5a-3400bd30e637\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.223632 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-combined-ca-bundle\") pod \"6b775d20-0507-4612-af5a-3400bd30e637\" (UID: \"6b775d20-0507-4612-af5a-3400bd30e637\") " Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.230061 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-job-config-data" (OuterVolumeSpecName: "job-config-data") pod "6b775d20-0507-4612-af5a-3400bd30e637" (UID: "6b775d20-0507-4612-af5a-3400bd30e637"). InnerVolumeSpecName "job-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.230640 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b775d20-0507-4612-af5a-3400bd30e637-kube-api-access-96xxj" (OuterVolumeSpecName: "kube-api-access-96xxj") pod "6b775d20-0507-4612-af5a-3400bd30e637" (UID: "6b775d20-0507-4612-af5a-3400bd30e637"). InnerVolumeSpecName "kube-api-access-96xxj". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.238021 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-config-data" (OuterVolumeSpecName: "config-data") pod "6b775d20-0507-4612-af5a-3400bd30e637" (UID: "6b775d20-0507-4612-af5a-3400bd30e637"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.268069 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "6b775d20-0507-4612-af5a-3400bd30e637" (UID: "6b775d20-0507-4612-af5a-3400bd30e637"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.325921 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-96xxj\" (UniqueName: \"kubernetes.io/projected/6b775d20-0507-4612-af5a-3400bd30e637-kube-api-access-96xxj\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.326223 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.326233 4949 reconciler_common.go:293] "Volume detached for volume \"job-config-data\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-job-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.326241 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6b775d20-0507-4612-af5a-3400bd30e637-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.783876 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-db-sync-c2td2" event={"ID":"6b775d20-0507-4612-af5a-3400bd30e637","Type":"ContainerDied","Data":"5ddc0aa9a320c252bb03b568241d6bb6477493ad8921eb1f58903f7b0918b56b"} Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.783918 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5ddc0aa9a320c252bb03b568241d6bb6477493ad8921eb1f58903f7b0918b56b" Oct 01 16:36:08 crc kubenswrapper[4949]: I1001 16:36:08.783937 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-db-sync-c2td2" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.148607 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Oct 01 16:36:09 crc kubenswrapper[4949]: E1001 16:36:09.158635 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b775d20-0507-4612-af5a-3400bd30e637" containerName="manila-db-sync" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.158668 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b775d20-0507-4612-af5a-3400bd30e637" containerName="manila-db-sync" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.159011 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b775d20-0507-4612-af5a-3400bd30e637" containerName="manila-db-sync" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.160344 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.163265 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scripts" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.164002 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-config-data" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.164679 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.164793 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-manila-dockercfg-8d4hg" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.173706 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.208190 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Oct 01 16:36:09 crc 
kubenswrapper[4949]: I1001 16:36:09.214086 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.218860 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.232668 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.244334 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t64j7\" (UniqueName: \"kubernetes.io/projected/d3c0089c-0268-4b0b-a77d-fd339ae87759-kube-api-access-t64j7\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.244411 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-scripts\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.244433 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3c0089c-0268-4b0b-a77d-fd339ae87759-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.244454 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data-custom\") pod \"manila-scheduler-0\" (UID: 
\"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.244564 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.244586 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.332331 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-xzkvd"] Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.334983 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.347414 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.347666 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9hf7\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-kube-api-access-s9hf7\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.347773 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.347878 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.347989 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " 
pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.348087 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.348239 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.348332 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-ceph\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.348409 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.348494 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 
16:36:09.348595 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-scripts\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.348694 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t64j7\" (UniqueName: \"kubernetes.io/projected/d3c0089c-0268-4b0b-a77d-fd339ae87759-kube-api-access-t64j7\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.348822 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-scripts\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.348924 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3c0089c-0268-4b0b-a77d-fd339ae87759-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.349058 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3c0089c-0268-4b0b-a77d-fd339ae87759-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.357437 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.358623 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.359193 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-scripts\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.365585 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-xzkvd"] Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.385419 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.399352 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t64j7\" (UniqueName: \"kubernetes.io/projected/d3c0089c-0268-4b0b-a77d-fd339ae87759-kube-api-access-t64j7\") pod \"manila-scheduler-0\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.408783 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"] Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 
16:36:09.410398 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.413149 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.429597 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451083 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8455\" (UniqueName: \"kubernetes.io/projected/9ccacc95-2698-4d71-9762-9b7eaadb1a81-kube-api-access-k8455\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451150 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-config\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451173 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451233 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9hf7\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-kube-api-access-s9hf7\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " 
pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451255 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451292 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451322 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ccacc95-2698-4d71-9762-9b7eaadb1a81-logs\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451340 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451365 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451383 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451397 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-ceph\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451412 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d5vd\" (UniqueName: \"kubernetes.io/projected/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-kube-api-access-9d5vd\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451432 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451450 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ccacc95-2698-4d71-9762-9b7eaadb1a81-etc-machine-id\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451819 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451851 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451868 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451883 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data-custom\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451902 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451927 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-scripts\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.451954 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-scripts\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.459347 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.459737 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.460830 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-scripts\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.461175 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-ceph\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc 
kubenswrapper[4949]: I1001 16:36:09.462049 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.467915 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.469937 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.479647 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9hf7\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-kube-api-access-s9hf7\") pod \"manila-share-share1-0\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.484612 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.542812 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553670 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ccacc95-2698-4d71-9762-9b7eaadb1a81-logs\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553703 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553736 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553757 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9d5vd\" (UniqueName: \"kubernetes.io/projected/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-kube-api-access-9d5vd\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553775 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc 
kubenswrapper[4949]: I1001 16:36:09.553792 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ccacc95-2698-4d71-9762-9b7eaadb1a81-etc-machine-id\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553831 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553851 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553870 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data-custom\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553906 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-scripts\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553949 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k8455\" (UniqueName: 
\"kubernetes.io/projected/9ccacc95-2698-4d71-9762-9b7eaadb1a81-kube-api-access-k8455\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553976 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-config\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.553995 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.554477 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ccacc95-2698-4d71-9762-9b7eaadb1a81-etc-machine-id\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.554960 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-dns-svc\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.554964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ccacc95-2698-4d71-9762-9b7eaadb1a81-logs\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 
01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.555436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-openstack-edpm-ipam\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.555617 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-ovsdbserver-nb\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.555739 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-config\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.555737 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-ovsdbserver-sb\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.559676 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data-custom\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.563952 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.568633 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.575502 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-scripts\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.576077 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9d5vd\" (UniqueName: \"kubernetes.io/projected/251bdf57-7cc2-4c4d-b6d7-d5579e3e9341-kube-api-access-9d5vd\") pod \"dnsmasq-dns-76b5fdb995-xzkvd\" (UID: \"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341\") " pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.579686 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8455\" (UniqueName: \"kubernetes.io/projected/9ccacc95-2698-4d71-9762-9b7eaadb1a81-kube-api-access-k8455\") pod \"manila-api-0\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") " pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.635657 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.656512 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 01 16:36:09 crc kubenswrapper[4949]: I1001 16:36:09.829205 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:10.182732 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/dnsmasq-dns-76b5fdb995-xzkvd"] Oct 01 16:36:11 crc kubenswrapper[4949]: W1001 16:36:10.186195 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod251bdf57_7cc2_4c4d_b6d7_d5579e3e9341.slice/crio-4be6b291f08f29257f81a80fe810ebda9575d3a2e22893bb146ad21e41930954 WatchSource:0}: Error finding container 4be6b291f08f29257f81a80fe810ebda9575d3a2e22893bb146ad21e41930954: Status 404 returned error can't find the container with id 4be6b291f08f29257f81a80fe810ebda9575d3a2e22893bb146ad21e41930954 Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:10.207019 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:10.341774 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:10.810824 4949 generic.go:334] "Generic (PLEG): container finished" podID="251bdf57-7cc2-4c4d-b6d7-d5579e3e9341" containerID="25997d45bf9b030d27f466d8cbd24d59f6874a6f5eccc2a3bcb0e04d3ce2677a" exitCode=0 Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:10.811063 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" event={"ID":"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341","Type":"ContainerDied","Data":"25997d45bf9b030d27f466d8cbd24d59f6874a6f5eccc2a3bcb0e04d3ce2677a"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:10.811088 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" 
event={"ID":"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341","Type":"ContainerStarted","Data":"4be6b291f08f29257f81a80fe810ebda9575d3a2e22893bb146ad21e41930954"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:10.815252 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3c0089c-0268-4b0b-a77d-fd339ae87759","Type":"ContainerStarted","Data":"aec54d6f4e530a9ce7c354d41b6b81c16eb63dae85213d89fd1de00a7ffd5e64"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:10.818915 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7cc67aa3-197f-42f8-9c2f-8461e871faa5","Type":"ContainerStarted","Data":"179d9554d581bb2b22e954712dacc231f04516fe77ba7455c3948d1816f68187"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:10.821450 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9ccacc95-2698-4d71-9762-9b7eaadb1a81","Type":"ContainerStarted","Data":"db066f5b66b9fb33d2373cb48707bfbcba2169f1466c6983d5080eedb0ea9085"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.833236 4949 generic.go:334] "Generic (PLEG): container finished" podID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerID="7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1" exitCode=0 Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.833324 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f78d9658d-xl5nq" event={"ID":"b49dfcdb-910b-48e0-91bb-a426980dc277","Type":"ContainerDied","Data":"7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.836434 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3c0089c-0268-4b0b-a77d-fd339ae87759","Type":"ContainerStarted","Data":"7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.836479 
4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3c0089c-0268-4b0b-a77d-fd339ae87759","Type":"ContainerStarted","Data":"f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.841899 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9ccacc95-2698-4d71-9762-9b7eaadb1a81","Type":"ContainerStarted","Data":"225d308166be410daf7a1efe1db51b85c372977980a3d70cc488bd001d3a35b1"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.841960 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9ccacc95-2698-4d71-9762-9b7eaadb1a81","Type":"ContainerStarted","Data":"f0a0522de844f7ec0b016f9c50b1cb29fb9f61e71226149524cba4d4b2d2a56b"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.841980 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.847391 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" event={"ID":"251bdf57-7cc2-4c4d-b6d7-d5579e3e9341","Type":"ContainerStarted","Data":"7402fea987f2c661d09229c97adaaa8c06be8493d3b1a6079a20787266164246"} Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.847660 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.869473 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=2.131330082 podStartE2EDuration="2.869453564s" podCreationTimestamp="2025-10-01 16:36:09 +0000 UTC" firstStartedPulling="2025-10-01 16:36:09.84825295 +0000 UTC m=+3269.153859141" lastFinishedPulling="2025-10-01 16:36:10.586376432 +0000 UTC m=+3269.891982623" observedRunningTime="2025-10-01 16:36:11.8570031 +0000 
UTC m=+3271.162609311" watchObservedRunningTime="2025-10-01 16:36:11.869453564 +0000 UTC m=+3271.175059755" Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.888082 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd" podStartSLOduration=2.8880574169999997 podStartE2EDuration="2.888057417s" podCreationTimestamp="2025-10-01 16:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:36:11.881782644 +0000 UTC m=+3271.187388885" watchObservedRunningTime="2025-10-01 16:36:11.888057417 +0000 UTC m=+3271.193663608" Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.922433 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" podStartSLOduration=2.922386024 podStartE2EDuration="2.922386024s" podCreationTimestamp="2025-10-01 16:36:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:36:11.905728015 +0000 UTC m=+3271.211334196" watchObservedRunningTime="2025-10-01 16:36:11.922386024 +0000 UTC m=+3271.227992215" Oct 01 16:36:11 crc kubenswrapper[4949]: I1001 16:36:11.987766 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"] Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.404476 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f78d9658d-xl5nq" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.254:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.254:8443: connect: connection refused" Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.874505 4949 generic.go:334] "Generic (PLEG): container finished" podID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" 
containerID="4c05e1c34bacf35ea67c373df5f4fe8e4317af2fd01832ea8d64aea9c5c0ebde" exitCode=137 Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.874824 4949 generic.go:334] "Generic (PLEG): container finished" podID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" containerID="e527636871b09b9ce51622266540b5954657713feac9dccb570e0f16c92d7ffa" exitCode=137 Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.874604 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7756fd4cb7-fb6wc" event={"ID":"a2eee1a1-972c-4bf6-9dab-d55162980ed9","Type":"ContainerDied","Data":"4c05e1c34bacf35ea67c373df5f4fe8e4317af2fd01832ea8d64aea9c5c0ebde"} Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.874884 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7756fd4cb7-fb6wc" event={"ID":"a2eee1a1-972c-4bf6-9dab-d55162980ed9","Type":"ContainerDied","Data":"e527636871b09b9ce51622266540b5954657713feac9dccb570e0f16c92d7ffa"} Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.889676 4949 generic.go:334] "Generic (PLEG): container finished" podID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerID="f333c87ed382609f35357c9bab12b2a2bbedb3aa93010da4be8fe4f75495a1a2" exitCode=137 Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.889709 4949 generic.go:334] "Generic (PLEG): container finished" podID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerID="d35151c9be56ead7878ee7e525c027e34484acf7aa004d71616be7a8330e4f27" exitCode=137 Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.889746 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-788cb9bcc-8rj25" event={"ID":"51ce1d58-e3c1-4eff-819b-b4f6f19e3498","Type":"ContainerDied","Data":"f333c87ed382609f35357c9bab12b2a2bbedb3aa93010da4be8fe4f75495a1a2"} Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.889797 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-788cb9bcc-8rj25" 
event={"ID":"51ce1d58-e3c1-4eff-819b-b4f6f19e3498","Type":"ContainerDied","Data":"d35151c9be56ead7878ee7e525c027e34484acf7aa004d71616be7a8330e4f27"} Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.889962 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerName="manila-api-log" containerID="cri-o://f0a0522de844f7ec0b016f9c50b1cb29fb9f61e71226149524cba4d4b2d2a56b" gracePeriod=30 Oct 01 16:36:13 crc kubenswrapper[4949]: I1001 16:36:13.890010 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-api-0" podUID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerName="manila-api" containerID="cri-o://225d308166be410daf7a1efe1db51b85c372977980a3d70cc488bd001d3a35b1" gracePeriod=30 Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.241178 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-788cb9bcc-8rj25" Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.350641 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-config-data\") pod \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.350687 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-horizon-secret-key\") pod \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") " Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.350713 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-logs\") pod 
\"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") "
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.350746 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-scripts\") pod \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") "
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.350797 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-flsp2\" (UniqueName: \"kubernetes.io/projected/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-kube-api-access-flsp2\") pod \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\" (UID: \"51ce1d58-e3c1-4eff-819b-b4f6f19e3498\") "
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.351522 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-logs" (OuterVolumeSpecName: "logs") pod "51ce1d58-e3c1-4eff-819b-b4f6f19e3498" (UID: "51ce1d58-e3c1-4eff-819b-b4f6f19e3498"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.356335 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-kube-api-access-flsp2" (OuterVolumeSpecName: "kube-api-access-flsp2") pod "51ce1d58-e3c1-4eff-819b-b4f6f19e3498" (UID: "51ce1d58-e3c1-4eff-819b-b4f6f19e3498"). InnerVolumeSpecName "kube-api-access-flsp2". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.356803 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "51ce1d58-e3c1-4eff-819b-b4f6f19e3498" (UID: "51ce1d58-e3c1-4eff-819b-b4f6f19e3498"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.375908 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-scripts" (OuterVolumeSpecName: "scripts") pod "51ce1d58-e3c1-4eff-819b-b4f6f19e3498" (UID: "51ce1d58-e3c1-4eff-819b-b4f6f19e3498"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.376176 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-config-data" (OuterVolumeSpecName: "config-data") pod "51ce1d58-e3c1-4eff-819b-b4f6f19e3498" (UID: "51ce1d58-e3c1-4eff-819b-b4f6f19e3498"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.452963 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-config-data\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.453009 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.453025 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-logs\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.453036 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-scripts\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.453049 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-flsp2\" (UniqueName: \"kubernetes.io/projected/51ce1d58-e3c1-4eff-819b-b4f6f19e3498-kube-api-access-flsp2\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.908345 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-788cb9bcc-8rj25" event={"ID":"51ce1d58-e3c1-4eff-819b-b4f6f19e3498","Type":"ContainerDied","Data":"6d19bbc71b970035c754ff116fd49a27a4ccbbed800f90d3279e8d134fc0b1ec"}
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.908354 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-788cb9bcc-8rj25"
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.908400 4949 scope.go:117] "RemoveContainer" containerID="f333c87ed382609f35357c9bab12b2a2bbedb3aa93010da4be8fe4f75495a1a2"
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.911774 4949 generic.go:334] "Generic (PLEG): container finished" podID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerID="225d308166be410daf7a1efe1db51b85c372977980a3d70cc488bd001d3a35b1" exitCode=0
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.911805 4949 generic.go:334] "Generic (PLEG): container finished" podID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerID="f0a0522de844f7ec0b016f9c50b1cb29fb9f61e71226149524cba4d4b2d2a56b" exitCode=143
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.911824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9ccacc95-2698-4d71-9762-9b7eaadb1a81","Type":"ContainerDied","Data":"225d308166be410daf7a1efe1db51b85c372977980a3d70cc488bd001d3a35b1"}
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.911848 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9ccacc95-2698-4d71-9762-9b7eaadb1a81","Type":"ContainerDied","Data":"f0a0522de844f7ec0b016f9c50b1cb29fb9f61e71226149524cba4d4b2d2a56b"}
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.976424 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-788cb9bcc-8rj25"]
Oct 01 16:36:14 crc kubenswrapper[4949]: I1001 16:36:14.986089 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-788cb9bcc-8rj25"]
Oct 01 16:36:15 crc kubenswrapper[4949]: I1001 16:36:15.614975 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" path="/var/lib/kubelet/pods/51ce1d58-e3c1-4eff-819b-b4f6f19e3498/volumes"
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.057696 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"]
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.058295 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="ceilometer-central-agent" containerID="cri-o://946a6b626d30f73202269e09687cc84f40b3410711e2e3bd0f9193b7a401a14e" gracePeriod=30
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.058760 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="proxy-httpd" containerID="cri-o://5fb65314d9da2accfa5168bf188a43f203c5777662b29a4aaa382fd2b953230b" gracePeriod=30
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.058820 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="sg-core" containerID="cri-o://0c91a6d2b2323683370ce4b166f8316585c1db48b081a1cfc6659d754f9be901" gracePeriod=30
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.058866 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="ceilometer-notification-agent" containerID="cri-o://d2a043ada7ee2b29c171a5df0dec7188fa155611656b0510de9231338296fe57" gracePeriod=30
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.939685 4949 generic.go:334] "Generic (PLEG): container finished" podID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerID="5fb65314d9da2accfa5168bf188a43f203c5777662b29a4aaa382fd2b953230b" exitCode=0
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.939720 4949 generic.go:334] "Generic (PLEG): container finished" podID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerID="0c91a6d2b2323683370ce4b166f8316585c1db48b081a1cfc6659d754f9be901" exitCode=2
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.939730 4949 generic.go:334] "Generic (PLEG): container finished" podID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerID="946a6b626d30f73202269e09687cc84f40b3410711e2e3bd0f9193b7a401a14e" exitCode=0
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.939752 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerDied","Data":"5fb65314d9da2accfa5168bf188a43f203c5777662b29a4aaa382fd2b953230b"}
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.939782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerDied","Data":"0c91a6d2b2323683370ce4b166f8316585c1db48b081a1cfc6659d754f9be901"}
Oct 01 16:36:16 crc kubenswrapper[4949]: I1001 16:36:16.939793 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerDied","Data":"946a6b626d30f73202269e09687cc84f40b3410711e2e3bd0f9193b7a401a14e"}
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.485586 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0"
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.639335 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/dnsmasq-dns-76b5fdb995-xzkvd"
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.721307 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-5fmz9"]
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.721617 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" podUID="62f9a851-7558-4b9f-86fe-a5412eaf318e" containerName="dnsmasq-dns" containerID="cri-o://f9c3396ebafdf1fcf35a308b53799fb788160492967e33ff8b2f4ac1b30813bd" gracePeriod=10
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.855153 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.866029 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7756fd4cb7-fb6wc"
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.950881 4949 scope.go:117] "RemoveContainer" containerID="d35151c9be56ead7878ee7e525c027e34484acf7aa004d71616be7a8330e4f27"
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978561 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data\") pod \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978613 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-config-data\") pod \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978669 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ccacc95-2698-4d71-9762-9b7eaadb1a81-etc-machine-id\") pod \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978746 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-scripts\") pod \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978780 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data-custom\") pod \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978873 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-combined-ca-bundle\") pod \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978908 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ccacc95-2698-4d71-9762-9b7eaadb1a81-logs\") pod \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978924 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9ccacc95-2698-4d71-9762-9b7eaadb1a81-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "9ccacc95-2698-4d71-9762-9b7eaadb1a81" (UID: "9ccacc95-2698-4d71-9762-9b7eaadb1a81"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978966 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8455\" (UniqueName: \"kubernetes.io/projected/9ccacc95-2698-4d71-9762-9b7eaadb1a81-kube-api-access-k8455\") pod \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.978989 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2eee1a1-972c-4bf6-9dab-d55162980ed9-logs\") pod \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.979012 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7vwkc\" (UniqueName: \"kubernetes.io/projected/a2eee1a1-972c-4bf6-9dab-d55162980ed9-kube-api-access-7vwkc\") pod \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.979038 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-scripts\") pod \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\" (UID: \"9ccacc95-2698-4d71-9762-9b7eaadb1a81\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.979061 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2eee1a1-972c-4bf6-9dab-d55162980ed9-horizon-secret-key\") pod \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\" (UID: \"a2eee1a1-972c-4bf6-9dab-d55162980ed9\") "
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.979518 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/9ccacc95-2698-4d71-9762-9b7eaadb1a81-etc-machine-id\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.980434 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9ccacc95-2698-4d71-9762-9b7eaadb1a81-logs" (OuterVolumeSpecName: "logs") pod "9ccacc95-2698-4d71-9762-9b7eaadb1a81" (UID: "9ccacc95-2698-4d71-9762-9b7eaadb1a81"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.981500 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a2eee1a1-972c-4bf6-9dab-d55162980ed9-logs" (OuterVolumeSpecName: "logs") pod "a2eee1a1-972c-4bf6-9dab-d55162980ed9" (UID: "a2eee1a1-972c-4bf6-9dab-d55162980ed9"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.989442 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a2eee1a1-972c-4bf6-9dab-d55162980ed9-kube-api-access-7vwkc" (OuterVolumeSpecName: "kube-api-access-7vwkc") pod "a2eee1a1-972c-4bf6-9dab-d55162980ed9" (UID: "a2eee1a1-972c-4bf6-9dab-d55162980ed9"). InnerVolumeSpecName "kube-api-access-7vwkc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.989764 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7756fd4cb7-fb6wc" event={"ID":"a2eee1a1-972c-4bf6-9dab-d55162980ed9","Type":"ContainerDied","Data":"9e8c5d72acd6c4d43d59b2e3aa1b6db9c567c7b2f69a5d7769a53acc373825ef"}
Oct 01 16:36:19 crc kubenswrapper[4949]: I1001 16:36:19.989795 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7756fd4cb7-fb6wc"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.028881 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "9ccacc95-2698-4d71-9762-9b7eaadb1a81" (UID: "9ccacc95-2698-4d71-9762-9b7eaadb1a81"). InnerVolumeSpecName "config-data-custom". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.028971 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-scripts" (OuterVolumeSpecName: "scripts") pod "9ccacc95-2698-4d71-9762-9b7eaadb1a81" (UID: "9ccacc95-2698-4d71-9762-9b7eaadb1a81"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.029031 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9ccacc95-2698-4d71-9762-9b7eaadb1a81-kube-api-access-k8455" (OuterVolumeSpecName: "kube-api-access-k8455") pod "9ccacc95-2698-4d71-9762-9b7eaadb1a81" (UID: "9ccacc95-2698-4d71-9762-9b7eaadb1a81"). InnerVolumeSpecName "kube-api-access-k8455". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.029897 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-scripts" (OuterVolumeSpecName: "scripts") pod "a2eee1a1-972c-4bf6-9dab-d55162980ed9" (UID: "a2eee1a1-972c-4bf6-9dab-d55162980ed9"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.033313 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a2eee1a1-972c-4bf6-9dab-d55162980ed9-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "a2eee1a1-972c-4bf6-9dab-d55162980ed9" (UID: "a2eee1a1-972c-4bf6-9dab-d55162980ed9"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.039046 4949 generic.go:334] "Generic (PLEG): container finished" podID="62f9a851-7558-4b9f-86fe-a5412eaf318e" containerID="f9c3396ebafdf1fcf35a308b53799fb788160492967e33ff8b2f4ac1b30813bd" exitCode=0
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.039203 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" event={"ID":"62f9a851-7558-4b9f-86fe-a5412eaf318e","Type":"ContainerDied","Data":"f9c3396ebafdf1fcf35a308b53799fb788160492967e33ff8b2f4ac1b30813bd"}
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.056684 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-config-data" (OuterVolumeSpecName: "config-data") pod "a2eee1a1-972c-4bf6-9dab-d55162980ed9" (UID: "a2eee1a1-972c-4bf6-9dab-d55162980ed9"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.061161 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"9ccacc95-2698-4d71-9762-9b7eaadb1a81","Type":"ContainerDied","Data":"db066f5b66b9fb33d2373cb48707bfbcba2169f1466c6983d5080eedb0ea9085"}
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.061496 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-api-0"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.067269 4949 generic.go:334] "Generic (PLEG): container finished" podID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerID="d2a043ada7ee2b29c171a5df0dec7188fa155611656b0510de9231338296fe57" exitCode=0
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.067302 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerDied","Data":"d2a043ada7ee2b29c171a5df0dec7188fa155611656b0510de9231338296fe57"}
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.082497 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/9ccacc95-2698-4d71-9762-9b7eaadb1a81-logs\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.082537 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8455\" (UniqueName: \"kubernetes.io/projected/9ccacc95-2698-4d71-9762-9b7eaadb1a81-kube-api-access-k8455\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.082555 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a2eee1a1-972c-4bf6-9dab-d55162980ed9-logs\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.082567 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7vwkc\" (UniqueName: \"kubernetes.io/projected/a2eee1a1-972c-4bf6-9dab-d55162980ed9-kube-api-access-7vwkc\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.082579 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-scripts\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.082592 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/a2eee1a1-972c-4bf6-9dab-d55162980ed9-horizon-secret-key\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.082603 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-config-data\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.082617 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/a2eee1a1-972c-4bf6-9dab-d55162980ed9-scripts\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.082628 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data-custom\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.087300 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "9ccacc95-2698-4d71-9762-9b7eaadb1a81" (UID: "9ccacc95-2698-4d71-9762-9b7eaadb1a81"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.124236 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data" (OuterVolumeSpecName: "config-data") pod "9ccacc95-2698-4d71-9762-9b7eaadb1a81" (UID: "9ccacc95-2698-4d71-9762-9b7eaadb1a81"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.158525 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.180517 4949 scope.go:117] "RemoveContainer" containerID="4c05e1c34bacf35ea67c373df5f4fe8e4317af2fd01832ea8d64aea9c5c0ebde"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.190643 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-ceilometer-tls-certs\") pod \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") "
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.190894 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-sg-core-conf-yaml\") pod \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") "
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.191411 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-combined-ca-bundle\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.191433 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/9ccacc95-2698-4d71-9762-9b7eaadb1a81-config-data\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.231718 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "f7a262ff-0c09-4792-a7e7-e8fb709aa971" (UID: "f7a262ff-0c09-4792-a7e7-e8fb709aa971"). InnerVolumeSpecName "sg-core-conf-yaml". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.265807 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "f7a262ff-0c09-4792-a7e7-e8fb709aa971" (UID: "f7a262ff-0c09-4792-a7e7-e8fb709aa971"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.292969 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-combined-ca-bundle\") pod \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") "
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.293053 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-config-data\") pod \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") "
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.293084 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-scripts\") pod \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") "
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.293115 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wshz\" (UniqueName: \"kubernetes.io/projected/f7a262ff-0c09-4792-a7e7-e8fb709aa971-kube-api-access-7wshz\") pod \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") "
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.293771 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-log-httpd\") pod \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") "
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.293849 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-run-httpd\") pod \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\" (UID: \"f7a262ff-0c09-4792-a7e7-e8fb709aa971\") "
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.294320 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.294359 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.294770 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "f7a262ff-0c09-4792-a7e7-e8fb709aa971" (UID: "f7a262ff-0c09-4792-a7e7-e8fb709aa971"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.299470 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "f7a262ff-0c09-4792-a7e7-e8fb709aa971" (UID: "f7a262ff-0c09-4792-a7e7-e8fb709aa971"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.309328 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a262ff-0c09-4792-a7e7-e8fb709aa971-kube-api-access-7wshz" (OuterVolumeSpecName: "kube-api-access-7wshz") pod "f7a262ff-0c09-4792-a7e7-e8fb709aa971" (UID: "f7a262ff-0c09-4792-a7e7-e8fb709aa971"). InnerVolumeSpecName "kube-api-access-7wshz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.309437 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-scripts" (OuterVolumeSpecName: "scripts") pod "f7a262ff-0c09-4792-a7e7-e8fb709aa971" (UID: "f7a262ff-0c09-4792-a7e7-e8fb709aa971"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.345199 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7756fd4cb7-fb6wc"]
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.362184 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7756fd4cb7-fb6wc"]
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.396874 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wshz\" (UniqueName: \"kubernetes.io/projected/f7a262ff-0c09-4792-a7e7-e8fb709aa971-kube-api-access-7wshz\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.396909 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-log-httpd\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.396919 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/f7a262ff-0c09-4792-a7e7-e8fb709aa971-run-httpd\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.396928 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-scripts\") on node \"crc\" DevicePath \"\""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.406663 4949 scope.go:117] "RemoveContainer" containerID="e527636871b09b9ce51622266540b5954657713feac9dccb570e0f16c92d7ffa"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.408953 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "f7a262ff-0c09-4792-a7e7-e8fb709aa971" (UID: "f7a262ff-0c09-4792-a7e7-e8fb709aa971"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.412198 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-api-0"]
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.423602 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-api-0"]
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.450666 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-api-0"]
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.450754 4949 scope.go:117] "RemoveContainer" containerID="225d308166be410daf7a1efe1db51b85c372977980a3d70cc488bd001d3a35b1"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451177 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="proxy-httpd"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451204 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="proxy-httpd"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451221 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerName="horizon"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451229 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerName="horizon"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451244 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerName="manila-api-log"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451252 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerName="manila-api-log"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451268 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" containerName="horizon-log"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451275 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" containerName="horizon-log"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451299 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="ceilometer-central-agent"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451308 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="ceilometer-central-agent"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451326 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="ceilometer-notification-agent"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451334 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="ceilometer-notification-agent"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451353 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerName="horizon-log"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451361 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerName="horizon-log"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451372 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" containerName="horizon"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451381 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" containerName="horizon"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451400 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="sg-core"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451407 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="sg-core"
Oct 01 16:36:20 crc kubenswrapper[4949]: E1001 16:36:20.451424 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerName="manila-api"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451432 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerName="manila-api"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451642 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerName="horizon"
Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451667 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" containerName="horizon"
Oct 01 16:36:20 crc 
kubenswrapper[4949]: I1001 16:36:20.451682 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="ceilometer-notification-agent" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451700 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="proxy-httpd" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451713 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerName="manila-api-log" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451729 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="sg-core" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451741 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" containerName="horizon-log" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451755 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="51ce1d58-e3c1-4eff-819b-b4f6f19e3498" containerName="horizon-log" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451765 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" containerName="ceilometer-central-agent" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.451779 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" containerName="manila-api" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.452924 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.455105 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-internal-svc" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.455434 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-api-config-data" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.456957 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-manila-public-svc" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.460540 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-config-data" (OuterVolumeSpecName: "config-data") pod "f7a262ff-0c09-4792-a7e7-e8fb709aa971" (UID: "f7a262ff-0c09-4792-a7e7-e8fb709aa971"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.465031 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.467400 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.515268 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/510ed0b0-9c2c-4f54-8323-755cd65b4393-etc-machine-id\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.521326 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-config-data\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.521376 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-internal-tls-certs\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.521422 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvj2r\" (UniqueName: \"kubernetes.io/projected/510ed0b0-9c2c-4f54-8323-755cd65b4393-kube-api-access-rvj2r\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.521587 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-config-data-custom\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc 
kubenswrapper[4949]: I1001 16:36:20.521644 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.521720 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-public-tls-certs\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.521833 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-scripts\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.522009 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/510ed0b0-9c2c-4f54-8323-755cd65b4393-logs\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.522529 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.523672 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f7a262ff-0c09-4792-a7e7-e8fb709aa971-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 
16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.533821 4949 scope.go:117] "RemoveContainer" containerID="f0a0522de844f7ec0b016f9c50b1cb29fb9f61e71226149524cba4d4b2d2a56b" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625208 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-sb\") pod \"62f9a851-7558-4b9f-86fe-a5412eaf318e\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625347 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-config\") pod \"62f9a851-7558-4b9f-86fe-a5412eaf318e\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625399 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9bkbn\" (UniqueName: \"kubernetes.io/projected/62f9a851-7558-4b9f-86fe-a5412eaf318e-kube-api-access-9bkbn\") pod \"62f9a851-7558-4b9f-86fe-a5412eaf318e\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625440 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-dns-svc\") pod \"62f9a851-7558-4b9f-86fe-a5412eaf318e\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625469 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-openstack-edpm-ipam\") pod \"62f9a851-7558-4b9f-86fe-a5412eaf318e\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625538 
4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-nb\") pod \"62f9a851-7558-4b9f-86fe-a5412eaf318e\" (UID: \"62f9a851-7558-4b9f-86fe-a5412eaf318e\") " Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625785 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-public-tls-certs\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625843 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-scripts\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625909 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/510ed0b0-9c2c-4f54-8323-755cd65b4393-logs\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625943 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/510ed0b0-9c2c-4f54-8323-755cd65b4393-etc-machine-id\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625970 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-config-data\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") 
" pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.625984 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-internal-tls-certs\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.626003 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvj2r\" (UniqueName: \"kubernetes.io/projected/510ed0b0-9c2c-4f54-8323-755cd65b4393-kube-api-access-rvj2r\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.626033 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-config-data-custom\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.626055 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.628477 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/510ed0b0-9c2c-4f54-8323-755cd65b4393-logs\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.629168 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/62f9a851-7558-4b9f-86fe-a5412eaf318e-kube-api-access-9bkbn" (OuterVolumeSpecName: "kube-api-access-9bkbn") pod "62f9a851-7558-4b9f-86fe-a5412eaf318e" (UID: "62f9a851-7558-4b9f-86fe-a5412eaf318e"). InnerVolumeSpecName "kube-api-access-9bkbn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.630729 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/510ed0b0-9c2c-4f54-8323-755cd65b4393-etc-machine-id\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.631951 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-public-tls-certs\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.633288 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-combined-ca-bundle\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.637296 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-scripts\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.638176 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-config-data-custom\") pod \"manila-api-0\" (UID: 
\"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.645353 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-config-data\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.648583 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvj2r\" (UniqueName: \"kubernetes.io/projected/510ed0b0-9c2c-4f54-8323-755cd65b4393-kube-api-access-rvj2r\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.653628 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/510ed0b0-9c2c-4f54-8323-755cd65b4393-internal-tls-certs\") pod \"manila-api-0\" (UID: \"510ed0b0-9c2c-4f54-8323-755cd65b4393\") " pod="openstack/manila-api-0" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.680179 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-sb" (OuterVolumeSpecName: "ovsdbserver-sb") pod "62f9a851-7558-4b9f-86fe-a5412eaf318e" (UID: "62f9a851-7558-4b9f-86fe-a5412eaf318e"). InnerVolumeSpecName "ovsdbserver-sb". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.698403 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-config" (OuterVolumeSpecName: "config") pod "62f9a851-7558-4b9f-86fe-a5412eaf318e" (UID: "62f9a851-7558-4b9f-86fe-a5412eaf318e"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.704372 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-dns-svc" (OuterVolumeSpecName: "dns-svc") pod "62f9a851-7558-4b9f-86fe-a5412eaf318e" (UID: "62f9a851-7558-4b9f-86fe-a5412eaf318e"). InnerVolumeSpecName "dns-svc". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.707146 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-openstack-edpm-ipam" (OuterVolumeSpecName: "openstack-edpm-ipam") pod "62f9a851-7558-4b9f-86fe-a5412eaf318e" (UID: "62f9a851-7558-4b9f-86fe-a5412eaf318e"). InnerVolumeSpecName "openstack-edpm-ipam". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.708725 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-nb" (OuterVolumeSpecName: "ovsdbserver-nb") pod "62f9a851-7558-4b9f-86fe-a5412eaf318e" (UID: "62f9a851-7558-4b9f-86fe-a5412eaf318e"). InnerVolumeSpecName "ovsdbserver-nb". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.727671 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-nb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-nb\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.727708 4949 reconciler_common.go:293] "Volume detached for volume \"ovsdbserver-sb\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-ovsdbserver-sb\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.727721 4949 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-config\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.727734 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9bkbn\" (UniqueName: \"kubernetes.io/projected/62f9a851-7558-4b9f-86fe-a5412eaf318e-kube-api-access-9bkbn\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.727747 4949 reconciler_common.go:293] "Volume detached for volume \"dns-svc\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-dns-svc\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.727758 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-edpm-ipam\" (UniqueName: \"kubernetes.io/configmap/62f9a851-7558-4b9f-86fe-a5412eaf318e-openstack-edpm-ipam\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:20 crc kubenswrapper[4949]: I1001 16:36:20.833786 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-api-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.098690 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"f7a262ff-0c09-4792-a7e7-e8fb709aa971","Type":"ContainerDied","Data":"581bd4750393be3da4103f4fff0134dd605b6310abb00213d8bd2cd2aa22e392"} Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.099475 4949 scope.go:117] "RemoveContainer" containerID="5fb65314d9da2accfa5168bf188a43f203c5777662b29a4aaa382fd2b953230b" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.100451 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.125137 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.125182 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/dnsmasq-dns-864d5fc68c-5fmz9" event={"ID":"62f9a851-7558-4b9f-86fe-a5412eaf318e","Type":"ContainerDied","Data":"3163a78cb97b0a9c01fa2f0484b1539d2a018e3a05037644fc9d0df576f1f298"} Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.130296 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7cc67aa3-197f-42f8-9c2f-8461e871faa5","Type":"ContainerStarted","Data":"75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a"} Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.178064 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.197878 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:21 crc kubenswrapper[4949]: E1001 16:36:21.204570 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: 
[\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62f9a851_7558_4b9f_86fe_a5412eaf318e.slice\": RecentStats: unable to find data in memory cache]" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.204901 4949 scope.go:117] "RemoveContainer" containerID="0c91a6d2b2323683370ce4b166f8316585c1db48b081a1cfc6659d754f9be901" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.209531 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:21 crc kubenswrapper[4949]: E1001 16:36:21.210007 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f9a851-7558-4b9f-86fe-a5412eaf318e" containerName="init" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.210072 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f9a851-7558-4b9f-86fe-a5412eaf318e" containerName="init" Oct 01 16:36:21 crc kubenswrapper[4949]: E1001 16:36:21.210194 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="62f9a851-7558-4b9f-86fe-a5412eaf318e" containerName="dnsmasq-dns" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.210272 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="62f9a851-7558-4b9f-86fe-a5412eaf318e" containerName="dnsmasq-dns" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.210499 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="62f9a851-7558-4b9f-86fe-a5412eaf318e" containerName="dnsmasq-dns" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.212264 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.223799 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.224774 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.224809 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.224979 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.244485 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-5fmz9"] Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.261797 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/dnsmasq-dns-864d5fc68c-5fmz9"] Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.262606 4949 scope.go:117] "RemoveContainer" containerID="d2a043ada7ee2b29c171a5df0dec7188fa155611656b0510de9231338296fe57" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.290880 4949 scope.go:117] "RemoveContainer" containerID="946a6b626d30f73202269e09687cc84f40b3410711e2e3bd0f9193b7a401a14e" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.342011 4949 scope.go:117] "RemoveContainer" containerID="f9c3396ebafdf1fcf35a308b53799fb788160492967e33ff8b2f4ac1b30813bd" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.346051 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: 
I1001 16:36:21.346110 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqptt\" (UniqueName: \"kubernetes.io/projected/120f2b86-7848-4c46-953f-f326dadc89c7-kube-api-access-lqptt\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.346252 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-log-httpd\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.346278 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-run-httpd\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.346311 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.346332 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-config-data\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.346364 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.346397 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-scripts\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.364907 4949 scope.go:117] "RemoveContainer" containerID="905c85410f10fc9e6e2dec532d30528a16c6dd291d540c84bddabeb77fb16c3f" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.435193 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-api-0"] Oct 01 16:36:21 crc kubenswrapper[4949]: W1001 16:36:21.443978 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod510ed0b0_9c2c_4f54_8323_755cd65b4393.slice/crio-6a30dae3498f92291158d2304470bb1c36452d3027a09dc03df2712c5ea2d9d8 WatchSource:0}: Error finding container 6a30dae3498f92291158d2304470bb1c36452d3027a09dc03df2712c5ea2d9d8: Status 404 returned error can't find the container with id 6a30dae3498f92291158d2304470bb1c36452d3027a09dc03df2712c5ea2d9d8 Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.447597 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-config-data\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.447655 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: 
\"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.447696 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-scripts\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.447750 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.447790 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqptt\" (UniqueName: \"kubernetes.io/projected/120f2b86-7848-4c46-953f-f326dadc89c7-kube-api-access-lqptt\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.447861 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-log-httpd\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.448231 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-run-httpd\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 
16:36:21.448621 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-run-httpd\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.448690 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.448716 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-log-httpd\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.452117 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-scripts\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.452826 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.454886 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-config-data\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " 
pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.456050 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.456636 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.463168 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqptt\" (UniqueName: \"kubernetes.io/projected/120f2b86-7848-4c46-953f-f326dadc89c7-kube-api-access-lqptt\") pod \"ceilometer-0\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.549930 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.622254 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="62f9a851-7558-4b9f-86fe-a5412eaf318e" path="/var/lib/kubelet/pods/62f9a851-7558-4b9f-86fe-a5412eaf318e/volumes" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.623364 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9ccacc95-2698-4d71-9762-9b7eaadb1a81" path="/var/lib/kubelet/pods/9ccacc95-2698-4d71-9762-9b7eaadb1a81/volumes" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.628504 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a2eee1a1-972c-4bf6-9dab-d55162980ed9" path="/var/lib/kubelet/pods/a2eee1a1-972c-4bf6-9dab-d55162980ed9/volumes" Oct 01 16:36:21 crc kubenswrapper[4949]: I1001 16:36:21.629309 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a262ff-0c09-4792-a7e7-e8fb709aa971" path="/var/lib/kubelet/pods/f7a262ff-0c09-4792-a7e7-e8fb709aa971/volumes" Oct 01 16:36:22 crc kubenswrapper[4949]: I1001 16:36:22.082866 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:22 crc kubenswrapper[4949]: I1001 16:36:22.147187 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7cc67aa3-197f-42f8-9c2f-8461e871faa5","Type":"ContainerStarted","Data":"0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c"} Oct 01 16:36:22 crc kubenswrapper[4949]: I1001 16:36:22.154247 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"510ed0b0-9c2c-4f54-8323-755cd65b4393","Type":"ContainerStarted","Data":"51cda65d66557d8133de96847f8b7020c6ceef5e06cff0f4091a4281991712ad"} Oct 01 16:36:22 crc kubenswrapper[4949]: I1001 16:36:22.154297 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" 
event={"ID":"510ed0b0-9c2c-4f54-8323-755cd65b4393","Type":"ContainerStarted","Data":"6a30dae3498f92291158d2304470bb1c36452d3027a09dc03df2712c5ea2d9d8"} Oct 01 16:36:22 crc kubenswrapper[4949]: I1001 16:36:22.157539 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerStarted","Data":"23d88bf183b091dd341910566df47a6414c90944e6b6e319a97e68f179c4491c"} Oct 01 16:36:22 crc kubenswrapper[4949]: I1001 16:36:22.169521 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.346071337 podStartE2EDuration="13.169506609s" podCreationTimestamp="2025-10-01 16:36:09 +0000 UTC" firstStartedPulling="2025-10-01 16:36:10.210281791 +0000 UTC m=+3269.515887982" lastFinishedPulling="2025-10-01 16:36:20.033717053 +0000 UTC m=+3279.339323254" observedRunningTime="2025-10-01 16:36:22.166910077 +0000 UTC m=+3281.472516268" watchObservedRunningTime="2025-10-01 16:36:22.169506609 +0000 UTC m=+3281.475112800" Oct 01 16:36:22 crc kubenswrapper[4949]: I1001 16:36:22.550119 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:23 crc kubenswrapper[4949]: I1001 16:36:23.173361 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerStarted","Data":"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500"} Oct 01 16:36:23 crc kubenswrapper[4949]: I1001 16:36:23.177719 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-api-0" event={"ID":"510ed0b0-9c2c-4f54-8323-755cd65b4393","Type":"ContainerStarted","Data":"88fc010c384b55c74c73f871e3b886955bf6dff58720a21dea33c138ddd83929"} Oct 01 16:36:23 crc kubenswrapper[4949]: I1001 16:36:23.214116 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-api-0" 
podStartSLOduration=3.214094399 podStartE2EDuration="3.214094399s" podCreationTimestamp="2025-10-01 16:36:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:36:23.204954867 +0000 UTC m=+3282.510561078" watchObservedRunningTime="2025-10-01 16:36:23.214094399 +0000 UTC m=+3282.519700590" Oct 01 16:36:23 crc kubenswrapper[4949]: I1001 16:36:23.403717 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f78d9658d-xl5nq" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.254:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.254:8443: connect: connection refused" Oct 01 16:36:24 crc kubenswrapper[4949]: I1001 16:36:24.200112 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerStarted","Data":"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e"} Oct 01 16:36:24 crc kubenswrapper[4949]: I1001 16:36:24.200442 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/manila-api-0" Oct 01 16:36:25 crc kubenswrapper[4949]: I1001 16:36:25.212324 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerStarted","Data":"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253"} Oct 01 16:36:27 crc kubenswrapper[4949]: I1001 16:36:27.237233 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerStarted","Data":"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313"} Oct 01 16:36:27 crc kubenswrapper[4949]: I1001 16:36:27.237837 4949 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openstack/ceilometer-0" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="ceilometer-central-agent" containerID="cri-o://a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500" gracePeriod=30 Oct 01 16:36:27 crc kubenswrapper[4949]: I1001 16:36:27.238164 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 01 16:36:27 crc kubenswrapper[4949]: I1001 16:36:27.238445 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="proxy-httpd" containerID="cri-o://2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313" gracePeriod=30 Oct 01 16:36:27 crc kubenswrapper[4949]: I1001 16:36:27.238499 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="sg-core" containerID="cri-o://fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253" gracePeriod=30 Oct 01 16:36:27 crc kubenswrapper[4949]: I1001 16:36:27.238542 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/ceilometer-0" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="ceilometer-notification-agent" containerID="cri-o://d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e" gracePeriod=30 Oct 01 16:36:27 crc kubenswrapper[4949]: I1001 16:36:27.272846 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=1.5744640859999999 podStartE2EDuration="6.272824838s" podCreationTimestamp="2025-10-01 16:36:21 +0000 UTC" firstStartedPulling="2025-10-01 16:36:22.098760317 +0000 UTC m=+3281.404366508" lastFinishedPulling="2025-10-01 16:36:26.797121029 +0000 UTC m=+3286.102727260" observedRunningTime="2025-10-01 16:36:27.261587098 +0000 UTC m=+3286.567193299" watchObservedRunningTime="2025-10-01 
16:36:27.272824838 +0000 UTC m=+3286.578431049" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.105508 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.194211 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqptt\" (UniqueName: \"kubernetes.io/projected/120f2b86-7848-4c46-953f-f326dadc89c7-kube-api-access-lqptt\") pod \"120f2b86-7848-4c46-953f-f326dadc89c7\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.194291 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-ceilometer-tls-certs\") pod \"120f2b86-7848-4c46-953f-f326dadc89c7\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.194333 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-scripts\") pod \"120f2b86-7848-4c46-953f-f326dadc89c7\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.194497 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-log-httpd\") pod \"120f2b86-7848-4c46-953f-f326dadc89c7\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.194574 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-combined-ca-bundle\") pod \"120f2b86-7848-4c46-953f-f326dadc89c7\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " Oct 01 
16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.194622 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-config-data\") pod \"120f2b86-7848-4c46-953f-f326dadc89c7\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.194737 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-sg-core-conf-yaml\") pod \"120f2b86-7848-4c46-953f-f326dadc89c7\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.194783 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-run-httpd\") pod \"120f2b86-7848-4c46-953f-f326dadc89c7\" (UID: \"120f2b86-7848-4c46-953f-f326dadc89c7\") " Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.195655 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "120f2b86-7848-4c46-953f-f326dadc89c7" (UID: "120f2b86-7848-4c46-953f-f326dadc89c7"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.195840 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "120f2b86-7848-4c46-953f-f326dadc89c7" (UID: "120f2b86-7848-4c46-953f-f326dadc89c7"). InnerVolumeSpecName "log-httpd". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.199902 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/120f2b86-7848-4c46-953f-f326dadc89c7-kube-api-access-lqptt" (OuterVolumeSpecName: "kube-api-access-lqptt") pod "120f2b86-7848-4c46-953f-f326dadc89c7" (UID: "120f2b86-7848-4c46-953f-f326dadc89c7"). InnerVolumeSpecName "kube-api-access-lqptt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.202969 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-scripts" (OuterVolumeSpecName: "scripts") pod "120f2b86-7848-4c46-953f-f326dadc89c7" (UID: "120f2b86-7848-4c46-953f-f326dadc89c7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.248497 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-sg-core-conf-yaml" (OuterVolumeSpecName: "sg-core-conf-yaml") pod "120f2b86-7848-4c46-953f-f326dadc89c7" (UID: "120f2b86-7848-4c46-953f-f326dadc89c7"). InnerVolumeSpecName "sg-core-conf-yaml". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249030 4949 generic.go:334] "Generic (PLEG): container finished" podID="120f2b86-7848-4c46-953f-f326dadc89c7" containerID="2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313" exitCode=0 Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249052 4949 generic.go:334] "Generic (PLEG): container finished" podID="120f2b86-7848-4c46-953f-f326dadc89c7" containerID="fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253" exitCode=2 Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249061 4949 generic.go:334] "Generic (PLEG): container finished" podID="120f2b86-7848-4c46-953f-f326dadc89c7" containerID="d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e" exitCode=0 Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249068 4949 generic.go:334] "Generic (PLEG): container finished" podID="120f2b86-7848-4c46-953f-f326dadc89c7" containerID="a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500" exitCode=0 Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249087 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerDied","Data":"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313"} Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249114 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerDied","Data":"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253"} Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249138 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerDied","Data":"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e"} Oct 01 16:36:28 crc 
kubenswrapper[4949]: I1001 16:36:28.249147 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerDied","Data":"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500"} Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249154 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"120f2b86-7848-4c46-953f-f326dadc89c7","Type":"ContainerDied","Data":"23d88bf183b091dd341910566df47a6414c90944e6b6e319a97e68f179c4491c"} Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249169 4949 scope.go:117] "RemoveContainer" containerID="2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.249439 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.259353 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-ceilometer-tls-certs" (OuterVolumeSpecName: "ceilometer-tls-certs") pod "120f2b86-7848-4c46-953f-f326dadc89c7" (UID: "120f2b86-7848-4c46-953f-f326dadc89c7"). InnerVolumeSpecName "ceilometer-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.289774 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "120f2b86-7848-4c46-953f-f326dadc89c7" (UID: "120f2b86-7848-4c46-953f-f326dadc89c7"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.296974 4949 reconciler_common.go:293] "Volume detached for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-sg-core-conf-yaml\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.297004 4949 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-run-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.297016 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqptt\" (UniqueName: \"kubernetes.io/projected/120f2b86-7848-4c46-953f-f326dadc89c7-kube-api-access-lqptt\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.297026 4949 reconciler_common.go:293] "Volume detached for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-ceilometer-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.297033 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.297042 4949 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/120f2b86-7848-4c46-953f-f326dadc89c7-log-httpd\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.297050 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.320446 4949 
scope.go:117] "RemoveContainer" containerID="fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.331910 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-config-data" (OuterVolumeSpecName: "config-data") pod "120f2b86-7848-4c46-953f-f326dadc89c7" (UID: "120f2b86-7848-4c46-953f-f326dadc89c7"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.340078 4949 scope.go:117] "RemoveContainer" containerID="d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.359841 4949 scope.go:117] "RemoveContainer" containerID="a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.382867 4949 scope.go:117] "RemoveContainer" containerID="2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313" Oct 01 16:36:28 crc kubenswrapper[4949]: E1001 16:36:28.383646 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313\": container with ID starting with 2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313 not found: ID does not exist" containerID="2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.383766 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313"} err="failed to get container status \"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313\": rpc error: code = NotFound desc = could not find container 
\"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313\": container with ID starting with 2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.383844 4949 scope.go:117] "RemoveContainer" containerID="fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253" Oct 01 16:36:28 crc kubenswrapper[4949]: E1001 16:36:28.384312 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253\": container with ID starting with fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253 not found: ID does not exist" containerID="fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.384350 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253"} err="failed to get container status \"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253\": rpc error: code = NotFound desc = could not find container \"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253\": container with ID starting with fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.384378 4949 scope.go:117] "RemoveContainer" containerID="d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e" Oct 01 16:36:28 crc kubenswrapper[4949]: E1001 16:36:28.384672 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e\": container with ID starting with d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e not found: ID does not exist" 
containerID="d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.384771 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e"} err="failed to get container status \"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e\": rpc error: code = NotFound desc = could not find container \"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e\": container with ID starting with d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.384802 4949 scope.go:117] "RemoveContainer" containerID="a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500" Oct 01 16:36:28 crc kubenswrapper[4949]: E1001 16:36:28.385226 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500\": container with ID starting with a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500 not found: ID does not exist" containerID="a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.385275 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500"} err="failed to get container status \"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500\": rpc error: code = NotFound desc = could not find container \"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500\": container with ID starting with a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.385304 4949 scope.go:117] 
"RemoveContainer" containerID="2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.385665 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313"} err="failed to get container status \"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313\": rpc error: code = NotFound desc = could not find container \"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313\": container with ID starting with 2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.385716 4949 scope.go:117] "RemoveContainer" containerID="fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.386472 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253"} err="failed to get container status \"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253\": rpc error: code = NotFound desc = could not find container \"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253\": container with ID starting with fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.386515 4949 scope.go:117] "RemoveContainer" containerID="d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.386796 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e"} err="failed to get container status \"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e\": rpc error: code = 
NotFound desc = could not find container \"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e\": container with ID starting with d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.386835 4949 scope.go:117] "RemoveContainer" containerID="a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.387096 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500"} err="failed to get container status \"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500\": rpc error: code = NotFound desc = could not find container \"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500\": container with ID starting with a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.387160 4949 scope.go:117] "RemoveContainer" containerID="2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.387877 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313"} err="failed to get container status \"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313\": rpc error: code = NotFound desc = could not find container \"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313\": container with ID starting with 2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.387922 4949 scope.go:117] "RemoveContainer" containerID="fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253" Oct 01 16:36:28 crc 
kubenswrapper[4949]: I1001 16:36:28.388238 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253"} err="failed to get container status \"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253\": rpc error: code = NotFound desc = could not find container \"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253\": container with ID starting with fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.388289 4949 scope.go:117] "RemoveContainer" containerID="d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.388591 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e"} err="failed to get container status \"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e\": rpc error: code = NotFound desc = could not find container \"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e\": container with ID starting with d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.388640 4949 scope.go:117] "RemoveContainer" containerID="a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.388982 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500"} err="failed to get container status \"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500\": rpc error: code = NotFound desc = could not find container \"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500\": container 
with ID starting with a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.389056 4949 scope.go:117] "RemoveContainer" containerID="2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.389386 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313"} err="failed to get container status \"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313\": rpc error: code = NotFound desc = could not find container \"2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313\": container with ID starting with 2e17d12eb09b54c26ae7f405c1e950a04068d82700d9bbc08061dc7d524b8313 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.389435 4949 scope.go:117] "RemoveContainer" containerID="fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.390198 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253"} err="failed to get container status \"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253\": rpc error: code = NotFound desc = could not find container \"fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253\": container with ID starting with fbc119b87bb5dc07709de64bf5c24e2b5abb7aeadde780dbd8d8f7a0d1a84253 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.390242 4949 scope.go:117] "RemoveContainer" containerID="d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.390492 4949 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e"} err="failed to get container status \"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e\": rpc error: code = NotFound desc = could not find container \"d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e\": container with ID starting with d32e71a784aac80ad56051c2b91409f3b9efe0b550e7feaed0ec862fb9e2334e not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.390528 4949 scope.go:117] "RemoveContainer" containerID="a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.390785 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500"} err="failed to get container status \"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500\": rpc error: code = NotFound desc = could not find container \"a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500\": container with ID starting with a2b6472f302bf88e5b357b12552b01be7673b019e87645807f0c13aece9a3500 not found: ID does not exist" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.398715 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/120f2b86-7848-4c46-953f-f326dadc89c7-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.609454 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.633294 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.650299 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:28 crc kubenswrapper[4949]: E1001 16:36:28.651030 
4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="proxy-httpd" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.651066 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="proxy-httpd" Oct 01 16:36:28 crc kubenswrapper[4949]: E1001 16:36:28.651102 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="sg-core" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.651115 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="sg-core" Oct 01 16:36:28 crc kubenswrapper[4949]: E1001 16:36:28.651172 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="ceilometer-central-agent" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.651185 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="ceilometer-central-agent" Oct 01 16:36:28 crc kubenswrapper[4949]: E1001 16:36:28.651204 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="ceilometer-notification-agent" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.651217 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="ceilometer-notification-agent" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.651553 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="proxy-httpd" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.651595 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="sg-core" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 
16:36:28.651629 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="ceilometer-notification-agent" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.651654 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" containerName="ceilometer-central-agent" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.655158 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.656790 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.659709 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-config-data" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.660031 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"ceilometer-scripts" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.660251 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"cert-ceilometer-internal-svc" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.704983 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.705254 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bc079a4-d954-4112-8a29-06b54b15b8a0-log-httpd\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 
16:36:28.705430 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-scripts\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.705580 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-config-data\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.705716 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5l56\" (UniqueName: \"kubernetes.io/projected/8bc079a4-d954-4112-8a29-06b54b15b8a0-kube-api-access-z5l56\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.705886 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.706034 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bc079a4-d954-4112-8a29-06b54b15b8a0-run-httpd\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.706239 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.809653 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.809882 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.809919 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bc079a4-d954-4112-8a29-06b54b15b8a0-log-httpd\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.809964 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-scripts\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.810018 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-config-data\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 
crc kubenswrapper[4949]: I1001 16:36:28.810049 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5l56\" (UniqueName: \"kubernetes.io/projected/8bc079a4-d954-4112-8a29-06b54b15b8a0-kube-api-access-z5l56\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.810200 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.810251 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bc079a4-d954-4112-8a29-06b54b15b8a0-run-httpd\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.811201 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bc079a4-d954-4112-8a29-06b54b15b8a0-run-httpd\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.812709 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/8bc079a4-d954-4112-8a29-06b54b15b8a0-log-httpd\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.816807 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-scripts\") pod \"ceilometer-0\" 
(UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.817964 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-combined-ca-bundle\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.818384 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-conf-yaml\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-sg-core-conf-yaml\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.819455 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-config-data\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.820535 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-tls-certs\" (UniqueName: \"kubernetes.io/secret/8bc079a4-d954-4112-8a29-06b54b15b8a0-ceilometer-tls-certs\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.828192 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5l56\" (UniqueName: \"kubernetes.io/projected/8bc079a4-d954-4112-8a29-06b54b15b8a0-kube-api-access-z5l56\") pod \"ceilometer-0\" (UID: \"8bc079a4-d954-4112-8a29-06b54b15b8a0\") " pod="openstack/ceilometer-0" Oct 01 16:36:28 crc kubenswrapper[4949]: I1001 16:36:28.977638 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/ceilometer-0" Oct 01 16:36:29 crc kubenswrapper[4949]: I1001 16:36:29.459178 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/ceilometer-0"] Oct 01 16:36:29 crc kubenswrapper[4949]: W1001 16:36:29.471541 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8bc079a4_d954_4112_8a29_06b54b15b8a0.slice/crio-68c1d837b75013cc7da786aea2e43ff425907f712f34d685d8a33f87face4b71 WatchSource:0}: Error finding container 68c1d837b75013cc7da786aea2e43ff425907f712f34d685d8a33f87face4b71: Status 404 returned error can't find the container with id 68c1d837b75013cc7da786aea2e43ff425907f712f34d685d8a33f87face4b71 Oct 01 16:36:29 crc kubenswrapper[4949]: I1001 16:36:29.543532 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 01 16:36:29 crc kubenswrapper[4949]: I1001 16:36:29.619035 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="120f2b86-7848-4c46-953f-f326dadc89c7" path="/var/lib/kubelet/pods/120f2b86-7848-4c46-953f-f326dadc89c7/volumes" Oct 01 16:36:30 crc kubenswrapper[4949]: I1001 16:36:30.269252 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bc079a4-d954-4112-8a29-06b54b15b8a0","Type":"ContainerStarted","Data":"694a94039a9f7632a32024f8d86741b6eac034d87544c8b59b15aa74cf6c7bd0"} Oct 01 16:36:30 crc kubenswrapper[4949]: I1001 16:36:30.269933 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bc079a4-d954-4112-8a29-06b54b15b8a0","Type":"ContainerStarted","Data":"68c1d837b75013cc7da786aea2e43ff425907f712f34d685d8a33f87face4b71"} Oct 01 16:36:31 crc kubenswrapper[4949]: I1001 16:36:31.053500 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Oct 01 16:36:31 crc kubenswrapper[4949]: I1001 
16:36:31.128223 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Oct 01 16:36:31 crc kubenswrapper[4949]: I1001 16:36:31.278455 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bc079a4-d954-4112-8a29-06b54b15b8a0","Type":"ContainerStarted","Data":"79f7890c037f5ae07e7fcfca235749292f9ffa32f826d1c34696cfd5612d3e07"} Oct 01 16:36:31 crc kubenswrapper[4949]: I1001 16:36:31.278738 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerName="manila-scheduler" containerID="cri-o://f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05" gracePeriod=30 Oct 01 16:36:31 crc kubenswrapper[4949]: I1001 16:36:31.278796 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-scheduler-0" podUID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerName="probe" containerID="cri-o://7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad" gracePeriod=30 Oct 01 16:36:32 crc kubenswrapper[4949]: I1001 16:36:32.298906 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bc079a4-d954-4112-8a29-06b54b15b8a0","Type":"ContainerStarted","Data":"f49b8477e6852ffd3938d788a935529fbabab0b2174088e5b24ece57115de7f4"} Oct 01 16:36:32 crc kubenswrapper[4949]: I1001 16:36:32.302933 4949 generic.go:334] "Generic (PLEG): container finished" podID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerID="7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad" exitCode=0 Oct 01 16:36:32 crc kubenswrapper[4949]: I1001 16:36:32.303003 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3c0089c-0268-4b0b-a77d-fd339ae87759","Type":"ContainerDied","Data":"7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad"} Oct 01 16:36:33 crc 
kubenswrapper[4949]: I1001 16:36:33.403880 4949 prober.go:107] "Probe failed" probeType="Readiness" pod="openstack/horizon-7f78d9658d-xl5nq" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon" probeResult="failure" output="Get \"https://10.217.0.254:8443/dashboard/auth/login/?next=/dashboard/\": dial tcp 10.217.0.254:8443: connect: connection refused" Oct 01 16:36:33 crc kubenswrapper[4949]: I1001 16:36:33.404401 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:36:34 crc kubenswrapper[4949]: I1001 16:36:34.331619 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/ceilometer-0" event={"ID":"8bc079a4-d954-4112-8a29-06b54b15b8a0","Type":"ContainerStarted","Data":"19fe3759b1f49f228befb62ad55ec3cd28f1c7f12d11c5cebb017d252bffdfb2"} Oct 01 16:36:34 crc kubenswrapper[4949]: I1001 16:36:34.332040 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack/ceilometer-0" Oct 01 16:36:34 crc kubenswrapper[4949]: I1001 16:36:34.363567 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/ceilometer-0" podStartSLOduration=2.507137104 podStartE2EDuration="6.363544178s" podCreationTimestamp="2025-10-01 16:36:28 +0000 UTC" firstStartedPulling="2025-10-01 16:36:29.474174374 +0000 UTC m=+3288.779780605" lastFinishedPulling="2025-10-01 16:36:33.330581438 +0000 UTC m=+3292.636187679" observedRunningTime="2025-10-01 16:36:34.35456829 +0000 UTC m=+3293.660174481" watchObservedRunningTime="2025-10-01 16:36:34.363544178 +0000 UTC m=+3293.669150379" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.011649 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.108273 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t64j7\" (UniqueName: \"kubernetes.io/projected/d3c0089c-0268-4b0b-a77d-fd339ae87759-kube-api-access-t64j7\") pod \"d3c0089c-0268-4b0b-a77d-fd339ae87759\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.108365 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3c0089c-0268-4b0b-a77d-fd339ae87759-etc-machine-id\") pod \"d3c0089c-0268-4b0b-a77d-fd339ae87759\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.108401 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-combined-ca-bundle\") pod \"d3c0089c-0268-4b0b-a77d-fd339ae87759\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.108502 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d3c0089c-0268-4b0b-a77d-fd339ae87759-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "d3c0089c-0268-4b0b-a77d-fd339ae87759" (UID: "d3c0089c-0268-4b0b-a77d-fd339ae87759"). InnerVolumeSpecName "etc-machine-id". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.108638 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data-custom\") pod \"d3c0089c-0268-4b0b-a77d-fd339ae87759\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.108665 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data\") pod \"d3c0089c-0268-4b0b-a77d-fd339ae87759\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.108710 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-scripts\") pod \"d3c0089c-0268-4b0b-a77d-fd339ae87759\" (UID: \"d3c0089c-0268-4b0b-a77d-fd339ae87759\") " Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.109280 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/d3c0089c-0268-4b0b-a77d-fd339ae87759-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.118199 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "d3c0089c-0268-4b0b-a77d-fd339ae87759" (UID: "d3c0089c-0268-4b0b-a77d-fd339ae87759"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.121698 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-scripts" (OuterVolumeSpecName: "scripts") pod "d3c0089c-0268-4b0b-a77d-fd339ae87759" (UID: "d3c0089c-0268-4b0b-a77d-fd339ae87759"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.125437 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3c0089c-0268-4b0b-a77d-fd339ae87759-kube-api-access-t64j7" (OuterVolumeSpecName: "kube-api-access-t64j7") pod "d3c0089c-0268-4b0b-a77d-fd339ae87759" (UID: "d3c0089c-0268-4b0b-a77d-fd339ae87759"). InnerVolumeSpecName "kube-api-access-t64j7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.173233 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "d3c0089c-0268-4b0b-a77d-fd339ae87759" (UID: "d3c0089c-0268-4b0b-a77d-fd339ae87759"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.211529 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.211578 4949 reconciler_common.go:293] "Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.211591 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.211604 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t64j7\" (UniqueName: \"kubernetes.io/projected/d3c0089c-0268-4b0b-a77d-fd339ae87759-kube-api-access-t64j7\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.223503 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data" (OuterVolumeSpecName: "config-data") pod "d3c0089c-0268-4b0b-a77d-fd339ae87759" (UID: "d3c0089c-0268-4b0b-a77d-fd339ae87759"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.313280 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d3c0089c-0268-4b0b-a77d-fd339ae87759-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.362781 4949 generic.go:334] "Generic (PLEG): container finished" podID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerID="f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05" exitCode=0 Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.362817 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3c0089c-0268-4b0b-a77d-fd339ae87759","Type":"ContainerDied","Data":"f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05"} Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.362885 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"d3c0089c-0268-4b0b-a77d-fd339ae87759","Type":"ContainerDied","Data":"aec54d6f4e530a9ce7c354d41b6b81c16eb63dae85213d89fd1de00a7ffd5e64"} Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.362908 4949 scope.go:117] "RemoveContainer" containerID="7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.363263 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.390110 4949 scope.go:117] "RemoveContainer" containerID="f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.411088 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-scheduler-0"] Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.419429 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-scheduler-0"] Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.423492 4949 scope.go:117] "RemoveContainer" containerID="7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad" Oct 01 16:36:37 crc kubenswrapper[4949]: E1001 16:36:37.424100 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad\": container with ID starting with 7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad not found: ID does not exist" containerID="7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.424158 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad"} err="failed to get container status \"7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad\": rpc error: code = NotFound desc = could not find container \"7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad\": container with ID starting with 7b15a83f40fe48732b8dd314f63ad53f32b0e253875ffeabe8aed95a4a3046ad not found: ID does not exist" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.424183 4949 scope.go:117] "RemoveContainer" containerID="f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05" Oct 01 16:36:37 crc 
kubenswrapper[4949]: E1001 16:36:37.426651 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05\": container with ID starting with f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05 not found: ID does not exist" containerID="f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.426674 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05"} err="failed to get container status \"f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05\": rpc error: code = NotFound desc = could not find container \"f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05\": container with ID starting with f97825bc260248f60e8fc6fe8a5251d3e8c6975b37b9d08dcc9b85510a361b05 not found: ID does not exist" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.429993 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-scheduler-0"] Oct 01 16:36:37 crc kubenswrapper[4949]: E1001 16:36:37.430471 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerName="manila-scheduler" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.430492 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerName="manila-scheduler" Oct 01 16:36:37 crc kubenswrapper[4949]: E1001 16:36:37.430543 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerName="probe" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.430551 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerName="probe" Oct 01 16:36:37 crc 
kubenswrapper[4949]: I1001 16:36:37.430771 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerName="probe" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.430796 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="d3c0089c-0268-4b0b-a77d-fd339ae87759" containerName="manila-scheduler" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.431991 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.434954 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-scheduler-config-data" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.450627 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.517750 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.517824 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llxld\" (UniqueName: \"kubernetes.io/projected/32632bf5-03f4-494c-8e79-3cb86d093629-kube-api-access-llxld\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.517907 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-scripts\") pod \"manila-scheduler-0\" (UID: 
\"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.517958 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-config-data\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.518080 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32632bf5-03f4-494c-8e79-3cb86d093629-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.518103 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.618207 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d3c0089c-0268-4b0b-a77d-fd339ae87759" path="/var/lib/kubelet/pods/d3c0089c-0268-4b0b-a77d-fd339ae87759/volumes" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.619523 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llxld\" (UniqueName: \"kubernetes.io/projected/32632bf5-03f4-494c-8e79-3cb86d093629-kube-api-access-llxld\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.619597 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-scripts\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.619632 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-config-data\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.619706 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32632bf5-03f4-494c-8e79-3cb86d093629-etc-machine-id\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.619728 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.619778 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.619979 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/32632bf5-03f4-494c-8e79-3cb86d093629-etc-machine-id\") pod \"manila-scheduler-0\" (UID: 
\"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.623753 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-scripts\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.623864 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-config-data-custom\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.624298 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-config-data\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.625293 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/32632bf5-03f4-494c-8e79-3cb86d093629-combined-ca-bundle\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.639314 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llxld\" (UniqueName: \"kubernetes.io/projected/32632bf5-03f4-494c-8e79-3cb86d093629-kube-api-access-llxld\") pod \"manila-scheduler-0\" (UID: \"32632bf5-03f4-494c-8e79-3cb86d093629\") " pod="openstack/manila-scheduler-0" Oct 01 16:36:37 crc kubenswrapper[4949]: I1001 16:36:37.765971 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-scheduler-0" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.251483 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.281713 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-scheduler-0"] Oct 01 16:36:38 crc kubenswrapper[4949]: W1001 16:36:38.289730 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod32632bf5_03f4_494c_8e79_3cb86d093629.slice/crio-9415e79e346bcf8bc29bc667be86fdef5b5019a9f07680b4c0bc5b48b6c30173 WatchSource:0}: Error finding container 9415e79e346bcf8bc29bc667be86fdef5b5019a9f07680b4c0bc5b48b6c30173: Status 404 returned error can't find the container with id 9415e79e346bcf8bc29bc667be86fdef5b5019a9f07680b4c0bc5b48b6c30173 Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.334367 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-tls-certs\") pod \"b49dfcdb-910b-48e0-91bb-a426980dc277\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.334691 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-secret-key\") pod \"b49dfcdb-910b-48e0-91bb-a426980dc277\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.334845 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-config-data\") pod \"b49dfcdb-910b-48e0-91bb-a426980dc277\" (UID: 
\"b49dfcdb-910b-48e0-91bb-a426980dc277\") " Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.334945 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wcn9w\" (UniqueName: \"kubernetes.io/projected/b49dfcdb-910b-48e0-91bb-a426980dc277-kube-api-access-wcn9w\") pod \"b49dfcdb-910b-48e0-91bb-a426980dc277\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.335045 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49dfcdb-910b-48e0-91bb-a426980dc277-logs\") pod \"b49dfcdb-910b-48e0-91bb-a426980dc277\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.335302 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-scripts\") pod \"b49dfcdb-910b-48e0-91bb-a426980dc277\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.335390 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-combined-ca-bundle\") pod \"b49dfcdb-910b-48e0-91bb-a426980dc277\" (UID: \"b49dfcdb-910b-48e0-91bb-a426980dc277\") " Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.336250 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b49dfcdb-910b-48e0-91bb-a426980dc277-logs" (OuterVolumeSpecName: "logs") pod "b49dfcdb-910b-48e0-91bb-a426980dc277" (UID: "b49dfcdb-910b-48e0-91bb-a426980dc277"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.339715 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-secret-key" (OuterVolumeSpecName: "horizon-secret-key") pod "b49dfcdb-910b-48e0-91bb-a426980dc277" (UID: "b49dfcdb-910b-48e0-91bb-a426980dc277"). InnerVolumeSpecName "horizon-secret-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.341094 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b49dfcdb-910b-48e0-91bb-a426980dc277-kube-api-access-wcn9w" (OuterVolumeSpecName: "kube-api-access-wcn9w") pod "b49dfcdb-910b-48e0-91bb-a426980dc277" (UID: "b49dfcdb-910b-48e0-91bb-a426980dc277"). InnerVolumeSpecName "kube-api-access-wcn9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.357763 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-config-data" (OuterVolumeSpecName: "config-data") pod "b49dfcdb-910b-48e0-91bb-a426980dc277" (UID: "b49dfcdb-910b-48e0-91bb-a426980dc277"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.361656 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-scripts" (OuterVolumeSpecName: "scripts") pod "b49dfcdb-910b-48e0-91bb-a426980dc277" (UID: "b49dfcdb-910b-48e0-91bb-a426980dc277"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.364802 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "b49dfcdb-910b-48e0-91bb-a426980dc277" (UID: "b49dfcdb-910b-48e0-91bb-a426980dc277"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.374716 4949 generic.go:334] "Generic (PLEG): container finished" podID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerID="2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46" exitCode=137 Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.374786 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/horizon-7f78d9658d-xl5nq" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.374782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f78d9658d-xl5nq" event={"ID":"b49dfcdb-910b-48e0-91bb-a426980dc277","Type":"ContainerDied","Data":"2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46"} Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.375333 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/horizon-7f78d9658d-xl5nq" event={"ID":"b49dfcdb-910b-48e0-91bb-a426980dc277","Type":"ContainerDied","Data":"301323ff76bbfd53336b665902f54c0ceb204f65c6902d0b61dde6f4dfcbff27"} Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.375355 4949 scope.go:117] "RemoveContainer" containerID="7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.381075 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"32632bf5-03f4-494c-8e79-3cb86d093629","Type":"ContainerStarted","Data":"9415e79e346bcf8bc29bc667be86fdef5b5019a9f07680b4c0bc5b48b6c30173"} Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.386289 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-tls-certs" (OuterVolumeSpecName: "horizon-tls-certs") pod "b49dfcdb-910b-48e0-91bb-a426980dc277" (UID: "b49dfcdb-910b-48e0-91bb-a426980dc277"). InnerVolumeSpecName "horizon-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.437461 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.437489 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.437498 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-tls-certs\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-tls-certs\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.437506 4949 reconciler_common.go:293] "Volume detached for volume \"horizon-secret-key\" (UniqueName: \"kubernetes.io/secret/b49dfcdb-910b-48e0-91bb-a426980dc277-horizon-secret-key\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.437514 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/b49dfcdb-910b-48e0-91bb-a426980dc277-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.437523 4949 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wcn9w\" (UniqueName: \"kubernetes.io/projected/b49dfcdb-910b-48e0-91bb-a426980dc277-kube-api-access-wcn9w\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.437530 4949 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b49dfcdb-910b-48e0-91bb-a426980dc277-logs\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.566823 4949 scope.go:117] "RemoveContainer" containerID="2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.583345 4949 scope.go:117] "RemoveContainer" containerID="7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1" Oct 01 16:36:38 crc kubenswrapper[4949]: E1001 16:36:38.583772 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1\": container with ID starting with 7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1 not found: ID does not exist" containerID="7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.583825 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1"} err="failed to get container status \"7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1\": rpc error: code = NotFound desc = could not find container \"7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1\": container with ID starting with 7d34943c16dbb9818f91ad5b0f6b4a227edbc27415577f841aed76aad83b78e1 not found: ID does not exist" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.583893 4949 scope.go:117] "RemoveContainer" 
containerID="2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46" Oct 01 16:36:38 crc kubenswrapper[4949]: E1001 16:36:38.584273 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46\": container with ID starting with 2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46 not found: ID does not exist" containerID="2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.584313 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46"} err="failed to get container status \"2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46\": rpc error: code = NotFound desc = could not find container \"2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46\": container with ID starting with 2b7f23ac7a5568b87ae50dd77d191ecb1fb638e0f0f34ff46fbcb56839f9fb46 not found: ID does not exist" Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.717097 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/horizon-7f78d9658d-xl5nq"] Oct 01 16:36:38 crc kubenswrapper[4949]: I1001 16:36:38.725976 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/horizon-7f78d9658d-xl5nq"] Oct 01 16:36:39 crc kubenswrapper[4949]: I1001 16:36:39.397408 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" event={"ID":"32632bf5-03f4-494c-8e79-3cb86d093629","Type":"ContainerStarted","Data":"7bf767396cb88f607cebc554370820626e746e3a6d6d00d1df66a7c3972420be"} Oct 01 16:36:39 crc kubenswrapper[4949]: I1001 16:36:39.397776 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-scheduler-0" 
event={"ID":"32632bf5-03f4-494c-8e79-3cb86d093629","Type":"ContainerStarted","Data":"c74ec0f4dc616ff3fc98ae9ade16545958a93d964246fde5afcf54799176fa5a"} Oct 01 16:36:39 crc kubenswrapper[4949]: I1001 16:36:39.618195 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" path="/var/lib/kubelet/pods/b49dfcdb-910b-48e0-91bb-a426980dc277/volumes" Oct 01 16:36:40 crc kubenswrapper[4949]: I1001 16:36:40.953045 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Oct 01 16:36:40 crc kubenswrapper[4949]: I1001 16:36:40.995027 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-scheduler-0" podStartSLOduration=3.995001232 podStartE2EDuration="3.995001232s" podCreationTimestamp="2025-10-01 16:36:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:36:39.42497963 +0000 UTC m=+3298.730585821" watchObservedRunningTime="2025-10-01 16:36:40.995001232 +0000 UTC m=+3300.300607433" Oct 01 16:36:41 crc kubenswrapper[4949]: I1001 16:36:41.041003 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Oct 01 16:36:41 crc kubenswrapper[4949]: I1001 16:36:41.432530 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerName="manila-share" containerID="cri-o://75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a" gracePeriod=30 Oct 01 16:36:41 crc kubenswrapper[4949]: I1001 16:36:41.432606 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack/manila-share-share1-0" podUID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerName="probe" containerID="cri-o://0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c" gracePeriod=30 Oct 01 
16:36:41 crc kubenswrapper[4949]: E1001 16:36:41.770957 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cc67aa3_197f_42f8_9c2f_8461e871faa5.slice/crio-0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c.scope\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7cc67aa3_197f_42f8_9c2f_8461e871faa5.slice/crio-conmon-0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c.scope\": RecentStats: unable to find data in memory cache]" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.226013 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/manila-api-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.416576 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.455671 4949 generic.go:334] "Generic (PLEG): container finished" podID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerID="0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c" exitCode=0 Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.455704 4949 generic.go:334] "Generic (PLEG): container finished" podID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerID="75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a" exitCode=1 Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.455750 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7cc67aa3-197f-42f8-9c2f-8461e871faa5","Type":"ContainerDied","Data":"0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c"} Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.455777 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.455785 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7cc67aa3-197f-42f8-9c2f-8461e871faa5","Type":"ContainerDied","Data":"75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a"} Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.455824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"7cc67aa3-197f-42f8-9c2f-8461e871faa5","Type":"ContainerDied","Data":"179d9554d581bb2b22e954712dacc231f04516fe77ba7455c3948d1816f68187"} Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.455844 4949 scope.go:117] "RemoveContainer" containerID="0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.477316 4949 scope.go:117] "RemoveContainer" containerID="75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.495414 4949 scope.go:117] "RemoveContainer" containerID="0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c" Oct 01 16:36:42 crc kubenswrapper[4949]: E1001 16:36:42.495881 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c\": container with ID starting with 0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c not found: ID does not exist" containerID="0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.495920 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c"} err="failed to get container status 
\"0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c\": rpc error: code = NotFound desc = could not find container \"0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c\": container with ID starting with 0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c not found: ID does not exist" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.495950 4949 scope.go:117] "RemoveContainer" containerID="75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a" Oct 01 16:36:42 crc kubenswrapper[4949]: E1001 16:36:42.496328 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a\": container with ID starting with 75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a not found: ID does not exist" containerID="75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.496372 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a"} err="failed to get container status \"75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a\": rpc error: code = NotFound desc = could not find container \"75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a\": container with ID starting with 75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a not found: ID does not exist" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.496400 4949 scope.go:117] "RemoveContainer" containerID="0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.496728 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c"} err="failed to get 
container status \"0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c\": rpc error: code = NotFound desc = could not find container \"0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c\": container with ID starting with 0071108268b8c2be0bbccd5d9f658c7ff06324e6bb59779ccb9bb72a68bb022c not found: ID does not exist" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.496755 4949 scope.go:117] "RemoveContainer" containerID="75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.497011 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a"} err="failed to get container status \"75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a\": rpc error: code = NotFound desc = could not find container \"75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a\": container with ID starting with 75596e7b489b14276d229078a8475ec781d821e33274b8f8911c5cc64214f52a not found: ID does not exist" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.537617 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-ceph\") pod \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.537872 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data-custom\") pod \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.538002 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-scripts\") pod \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.538029 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-combined-ca-bundle\") pod \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.538063 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data\") pod \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.538107 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s9hf7\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-kube-api-access-s9hf7\") pod \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.538141 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-etc-machine-id\") pod \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.538173 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-var-lib-manila\") pod \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\" (UID: \"7cc67aa3-197f-42f8-9c2f-8461e871faa5\") " Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 
16:36:42.538593 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-var-lib-manila" (OuterVolumeSpecName: "var-lib-manila") pod "7cc67aa3-197f-42f8-9c2f-8461e871faa5" (UID: "7cc67aa3-197f-42f8-9c2f-8461e871faa5"). InnerVolumeSpecName "var-lib-manila". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.539428 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-etc-machine-id" (OuterVolumeSpecName: "etc-machine-id") pod "7cc67aa3-197f-42f8-9c2f-8461e871faa5" (UID: "7cc67aa3-197f-42f8-9c2f-8461e871faa5"). InnerVolumeSpecName "etc-machine-id". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.544063 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-scripts" (OuterVolumeSpecName: "scripts") pod "7cc67aa3-197f-42f8-9c2f-8461e871faa5" (UID: "7cc67aa3-197f-42f8-9c2f-8461e871faa5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.544151 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data-custom" (OuterVolumeSpecName: "config-data-custom") pod "7cc67aa3-197f-42f8-9c2f-8461e871faa5" (UID: "7cc67aa3-197f-42f8-9c2f-8461e871faa5"). InnerVolumeSpecName "config-data-custom". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.545620 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-ceph" (OuterVolumeSpecName: "ceph") pod "7cc67aa3-197f-42f8-9c2f-8461e871faa5" (UID: "7cc67aa3-197f-42f8-9c2f-8461e871faa5"). InnerVolumeSpecName "ceph". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.545853 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-kube-api-access-s9hf7" (OuterVolumeSpecName: "kube-api-access-s9hf7") pod "7cc67aa3-197f-42f8-9c2f-8461e871faa5" (UID: "7cc67aa3-197f-42f8-9c2f-8461e871faa5"). InnerVolumeSpecName "kube-api-access-s9hf7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.588011 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "7cc67aa3-197f-42f8-9c2f-8461e871faa5" (UID: "7cc67aa3-197f-42f8-9c2f-8461e871faa5"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.625170 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data" (OuterVolumeSpecName: "config-data") pod "7cc67aa3-197f-42f8-9c2f-8461e871faa5" (UID: "7cc67aa3-197f-42f8-9c2f-8461e871faa5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.640906 4949 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-scripts\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.640940 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.640955 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.640966 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s9hf7\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-kube-api-access-s9hf7\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.640976 4949 reconciler_common.go:293] "Volume detached for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-etc-machine-id\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.640985 4949 reconciler_common.go:293] "Volume detached for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/7cc67aa3-197f-42f8-9c2f-8461e871faa5-var-lib-manila\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.640995 4949 reconciler_common.go:293] "Volume detached for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/7cc67aa3-197f-42f8-9c2f-8461e871faa5-ceph\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.641003 4949 reconciler_common.go:293] 
"Volume detached for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/7cc67aa3-197f-42f8-9c2f-8461e871faa5-config-data-custom\") on node \"crc\" DevicePath \"\"" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.809429 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-share-share1-0"] Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.821817 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-share-share1-0"] Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.840522 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/manila-share-share1-0"] Oct 01 16:36:42 crc kubenswrapper[4949]: E1001 16:36:42.841183 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.841216 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon" Oct 01 16:36:42 crc kubenswrapper[4949]: E1001 16:36:42.841258 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerName="probe" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.841270 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerName="probe" Oct 01 16:36:42 crc kubenswrapper[4949]: E1001 16:36:42.841299 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerName="manila-share" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.841312 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerName="manila-share" Oct 01 16:36:42 crc kubenswrapper[4949]: E1001 16:36:42.841342 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon-log" 
Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.841354 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon-log" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.841680 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerName="manila-share" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.841714 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" containerName="probe" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.841750 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.841776 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b49dfcdb-910b-48e0-91bb-a426980dc277" containerName="horizon-log" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.843512 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.848862 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.850890 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"manila-share-share1-config-data" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.949321 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a6094af2-c113-4b8f-9de7-a8bc511523d6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.949361 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-scripts\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.949416 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-config-data\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.949586 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6094af2-c113-4b8f-9de7-a8bc511523d6-ceph\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.949685 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgnv5\" (UniqueName: \"kubernetes.io/projected/a6094af2-c113-4b8f-9de7-a8bc511523d6-kube-api-access-sgnv5\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.949723 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.950040 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6094af2-c113-4b8f-9de7-a8bc511523d6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:42 crc kubenswrapper[4949]: I1001 16:36:42.950114 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.051817 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-config-data\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.051891 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6094af2-c113-4b8f-9de7-a8bc511523d6-ceph\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.051937 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgnv5\" (UniqueName: \"kubernetes.io/projected/a6094af2-c113-4b8f-9de7-a8bc511523d6-kube-api-access-sgnv5\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.051963 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.052047 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6094af2-c113-4b8f-9de7-a8bc511523d6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.052142 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.052185 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-manila\" (UniqueName: 
\"kubernetes.io/host-path/a6094af2-c113-4b8f-9de7-a8bc511523d6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.052205 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-scripts\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.052357 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-manila\" (UniqueName: \"kubernetes.io/host-path/a6094af2-c113-4b8f-9de7-a8bc511523d6-var-lib-manila\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.052543 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-machine-id\" (UniqueName: \"kubernetes.io/host-path/a6094af2-c113-4b8f-9de7-a8bc511523d6-etc-machine-id\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.056113 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceph\" (UniqueName: \"kubernetes.io/projected/a6094af2-c113-4b8f-9de7-a8bc511523d6-ceph\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.056411 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-config-data\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 
01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.056436 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-scripts\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.056522 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-combined-ca-bundle\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.057200 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-custom\" (UniqueName: \"kubernetes.io/secret/a6094af2-c113-4b8f-9de7-a8bc511523d6-config-data-custom\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.078516 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgnv5\" (UniqueName: \"kubernetes.io/projected/a6094af2-c113-4b8f-9de7-a8bc511523d6-kube-api-access-sgnv5\") pod \"manila-share-share1-0\" (UID: \"a6094af2-c113-4b8f-9de7-a8bc511523d6\") " pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.169995 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/manila-share-share1-0" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.624857 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7cc67aa3-197f-42f8-9c2f-8461e871faa5" path="/var/lib/kubelet/pods/7cc67aa3-197f-42f8-9c2f-8461e871faa5/volumes" Oct 01 16:36:43 crc kubenswrapper[4949]: I1001 16:36:43.848664 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/manila-share-share1-0"] Oct 01 16:36:44 crc kubenswrapper[4949]: I1001 16:36:44.494693 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a6094af2-c113-4b8f-9de7-a8bc511523d6","Type":"ContainerStarted","Data":"e796488be0b2fcb2b03d11dcf7f810c81fa20b54bab30a941c3adc9188803dc5"} Oct 01 16:36:44 crc kubenswrapper[4949]: I1001 16:36:44.495047 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a6094af2-c113-4b8f-9de7-a8bc511523d6","Type":"ContainerStarted","Data":"071427003c41e22cf5880d692b3b4a36c8af5d33848cd062749d6cac5dd78301"} Oct 01 16:36:45 crc kubenswrapper[4949]: I1001 16:36:45.507349 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/manila-share-share1-0" event={"ID":"a6094af2-c113-4b8f-9de7-a8bc511523d6","Type":"ContainerStarted","Data":"464a455ea8926bd5542c55ae01aac65b10d26145a6bf833be4066456270d4923"} Oct 01 16:36:45 crc kubenswrapper[4949]: I1001 16:36:45.552271 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/manila-share-share1-0" podStartSLOduration=3.552247999 podStartE2EDuration="3.552247999s" podCreationTimestamp="2025-10-01 16:36:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:36:45.54029439 +0000 UTC m=+3304.845900661" watchObservedRunningTime="2025-10-01 16:36:45.552247999 +0000 UTC m=+3304.857854190" Oct 01 16:36:47 crc kubenswrapper[4949]: 
I1001 16:36:47.766851 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-scheduler-0" Oct 01 16:36:48 crc kubenswrapper[4949]: I1001 16:36:48.038921 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:36:48 crc kubenswrapper[4949]: I1001 16:36:48.039004 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:36:53 crc kubenswrapper[4949]: I1001 16:36:53.170461 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack/manila-share-share1-0" Oct 01 16:36:58 crc kubenswrapper[4949]: I1001 16:36:58.991283 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack/ceilometer-0" Oct 01 16:36:59 crc kubenswrapper[4949]: I1001 16:36:59.275212 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-scheduler-0" Oct 01 16:37:04 crc kubenswrapper[4949]: I1001 16:37:04.603357 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack/manila-share-share1-0" Oct 01 16:37:18 crc kubenswrapper[4949]: I1001 16:37:18.038917 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:37:18 crc kubenswrapper[4949]: I1001 16:37:18.039617 4949 
prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:37:48 crc kubenswrapper[4949]: I1001 16:37:48.038727 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:37:48 crc kubenswrapper[4949]: I1001 16:37:48.039387 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:37:48 crc kubenswrapper[4949]: I1001 16:37:48.039447 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 16:37:48 crc kubenswrapper[4949]: I1001 16:37:48.040438 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2ae213e758e493fc55d198c1a675039b65ae449df8af35955dd5bb19ef866b5d"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 16:37:48 crc kubenswrapper[4949]: I1001 16:37:48.040537 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" 
containerName="machine-config-daemon" containerID="cri-o://2ae213e758e493fc55d198c1a675039b65ae449df8af35955dd5bb19ef866b5d" gracePeriod=600 Oct 01 16:37:49 crc kubenswrapper[4949]: I1001 16:37:49.199969 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="2ae213e758e493fc55d198c1a675039b65ae449df8af35955dd5bb19ef866b5d" exitCode=0 Oct 01 16:37:49 crc kubenswrapper[4949]: I1001 16:37:49.200047 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"2ae213e758e493fc55d198c1a675039b65ae449df8af35955dd5bb19ef866b5d"} Oct 01 16:37:49 crc kubenswrapper[4949]: I1001 16:37:49.200742 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f"} Oct 01 16:37:49 crc kubenswrapper[4949]: I1001 16:37:49.200794 4949 scope.go:117] "RemoveContainer" containerID="2163a2b05def712ac5b674496b27b05a9fdad015f2ccd67d0776760e65d84a2c" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.099116 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/tempest-tests-tempest"] Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.103227 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.114185 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"test-operator-controller-priv-key" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.114278 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-custom-data-s0" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.114739 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.116459 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fx25w" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.120704 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.211803 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.212002 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.212512 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8r6nh\" (UniqueName: 
\"kubernetes.io/projected/375603dd-5dd3-4d2f-ac58-5335ebc721c0-kube-api-access-8r6nh\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.212680 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.212781 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.212914 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-config-data\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.213041 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.213712 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.213840 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.315106 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.315475 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.315646 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.315846 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8r6nh\" (UniqueName: 
\"kubernetes.io/projected/375603dd-5dd3-4d2f-ac58-5335ebc721c0-kube-api-access-8r6nh\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.315942 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-temporary\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.316118 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.316334 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.316483 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-config-data\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.316620 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: 
\"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.316774 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.317271 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-workdir\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.317539 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.318087 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.320417 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-config-data\") pod 
\"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.324614 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ca-certs\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.325474 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config-secret\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.331745 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ssh-key\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.338569 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8r6nh\" (UniqueName: \"kubernetes.io/projected/375603dd-5dd3-4d2f-ac58-5335ebc721c0-kube-api-access-8r6nh\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.355505 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"tempest-tests-tempest\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.441203 4949 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 01 16:38:13 crc kubenswrapper[4949]: I1001 16:38:13.981327 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/tempest-tests-tempest"] Oct 01 16:38:14 crc kubenswrapper[4949]: I1001 16:38:14.475700 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"375603dd-5dd3-4d2f-ac58-5335ebc721c0","Type":"ContainerStarted","Data":"d3f2d8c199bb8f9d4df2aba6de1909a436af752c4ea0d6e776c06e052ea5184e"} Oct 01 16:38:40 crc kubenswrapper[4949]: E1001 16:38:40.410014 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified" Oct 01 16:38:40 crc kubenswrapper[4949]: E1001 16:38:40.410866 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:tempest-tests-tempest-tests-runner,Image:quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:test-operator-ephemeral-workdir,ReadOnly:false,MountPath:/var/lib/tempest,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-ephemeral-temporary,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-data,ReadOnly:false,MountPath:/etc/test_operator,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:test-operator-logs,ReadOnly:false,MountPath:/var/lib/tempest/external_files,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/etc/openstack/clouds.yaml,Sub
Path:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config,ReadOnly:true,MountPath:/var/lib/tempest/.config/openstack/clouds.yaml,SubPath:clouds.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:openstack-config-secret,ReadOnly:false,MountPath:/etc/openstack/secure.yaml,SubPath:secure.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ca-certs,ReadOnly:true,MountPath:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem,SubPath:tls-ca-bundle.pem,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:ssh-key,ReadOnly:false,MountPath:/var/lib/tempest/id_ecdsa,SubPath:ssh_key,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8r6nh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*42480,RunAsNonRoot:*false,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:*true,RunAsGroup:*42480,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-custom-data-s0,},Optional:nil,},SecretRef:nil,},EnvFromSource{Prefix:,ConfigMapRef:&ConfigMapEnvSource{LocalObjectReference:LocalObjectReference{Name:tempest-tests-tempest-env-vars-s0,},Optional:nil,},SecretRef:nil,},},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
tempest-tests-tempest_openstack(375603dd-5dd3-4d2f-ac58-5335ebc721c0): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 01 16:38:40 crc kubenswrapper[4949]: E1001 16:38:40.412182 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openstack/tempest-tests-tempest" podUID="375603dd-5dd3-4d2f-ac58-5335ebc721c0" Oct 01 16:38:40 crc kubenswrapper[4949]: E1001 16:38:40.760843 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tempest-tests-tempest-tests-runner\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/podified-antelope-centos9/openstack-tempest-all:current-podified\\\"\"" pod="openstack/tempest-tests-tempest" podUID="375603dd-5dd3-4d2f-ac58-5335ebc721c0" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.586833 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vx8rp"] Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.591284 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.626827 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vx8rp"] Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.674077 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9nhd5\" (UniqueName: \"kubernetes.io/projected/8ffe1c1b-7237-478f-b718-ae5c36b65d15-kube-api-access-9nhd5\") pod \"certified-operators-vx8rp\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.674191 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-catalog-content\") pod \"certified-operators-vx8rp\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.674447 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-utilities\") pod \"certified-operators-vx8rp\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.776288 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9nhd5\" (UniqueName: \"kubernetes.io/projected/8ffe1c1b-7237-478f-b718-ae5c36b65d15-kube-api-access-9nhd5\") pod \"certified-operators-vx8rp\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.776386 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-catalog-content\") pod \"certified-operators-vx8rp\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.776929 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-catalog-content\") pod \"certified-operators-vx8rp\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.777057 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-utilities\") pod \"certified-operators-vx8rp\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.777381 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-utilities\") pod \"certified-operators-vx8rp\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.800447 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9nhd5\" (UniqueName: \"kubernetes.io/projected/8ffe1c1b-7237-478f-b718-ae5c36b65d15-kube-api-access-9nhd5\") pod \"certified-operators-vx8rp\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:47 crc kubenswrapper[4949]: I1001 16:38:47.932052 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:48 crc kubenswrapper[4949]: I1001 16:38:48.623112 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vx8rp"] Oct 01 16:38:48 crc kubenswrapper[4949]: I1001 16:38:48.865342 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx8rp" event={"ID":"8ffe1c1b-7237-478f-b718-ae5c36b65d15","Type":"ContainerStarted","Data":"0b67f68a0affe8e64ca939077179c081e1efc15ce7e45e49af7c5d15d36bebe4"} Oct 01 16:38:49 crc kubenswrapper[4949]: I1001 16:38:49.882881 4949 generic.go:334] "Generic (PLEG): container finished" podID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerID="d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14" exitCode=0 Oct 01 16:38:49 crc kubenswrapper[4949]: I1001 16:38:49.882967 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx8rp" event={"ID":"8ffe1c1b-7237-478f-b718-ae5c36b65d15","Type":"ContainerDied","Data":"d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14"} Oct 01 16:38:50 crc kubenswrapper[4949]: I1001 16:38:50.894116 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx8rp" event={"ID":"8ffe1c1b-7237-478f-b718-ae5c36b65d15","Type":"ContainerStarted","Data":"2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6"} Oct 01 16:38:51 crc kubenswrapper[4949]: I1001 16:38:51.908100 4949 generic.go:334] "Generic (PLEG): container finished" podID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerID="2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6" exitCode=0 Oct 01 16:38:51 crc kubenswrapper[4949]: I1001 16:38:51.908233 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx8rp" 
event={"ID":"8ffe1c1b-7237-478f-b718-ae5c36b65d15","Type":"ContainerDied","Data":"2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6"} Oct 01 16:38:52 crc kubenswrapper[4949]: I1001 16:38:52.920653 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx8rp" event={"ID":"8ffe1c1b-7237-478f-b718-ae5c36b65d15","Type":"ContainerStarted","Data":"2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a"} Oct 01 16:38:52 crc kubenswrapper[4949]: I1001 16:38:52.940035 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vx8rp" podStartSLOduration=3.425179668 podStartE2EDuration="5.940011584s" podCreationTimestamp="2025-10-01 16:38:47 +0000 UTC" firstStartedPulling="2025-10-01 16:38:49.88723244 +0000 UTC m=+3429.192838681" lastFinishedPulling="2025-10-01 16:38:52.402064396 +0000 UTC m=+3431.707670597" observedRunningTime="2025-10-01 16:38:52.935773937 +0000 UTC m=+3432.241380148" watchObservedRunningTime="2025-10-01 16:38:52.940011584 +0000 UTC m=+3432.245617785" Oct 01 16:38:56 crc kubenswrapper[4949]: I1001 16:38:56.097987 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack"/"tempest-tests-tempest-env-vars-s0" Oct 01 16:38:57 crc kubenswrapper[4949]: I1001 16:38:57.932614 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:57 crc kubenswrapper[4949]: I1001 16:38:57.933269 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:57 crc kubenswrapper[4949]: I1001 16:38:57.984275 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"375603dd-5dd3-4d2f-ac58-5335ebc721c0","Type":"ContainerStarted","Data":"90ccc406714e8d1f7386a089e60c53f461963176e960fc92e32aca72f4c72c3b"} Oct 01 16:38:57 crc 
kubenswrapper[4949]: I1001 16:38:57.999772 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:58 crc kubenswrapper[4949]: I1001 16:38:58.006364 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/tempest-tests-tempest" podStartSLOduration=3.90463563 podStartE2EDuration="46.006341562s" podCreationTimestamp="2025-10-01 16:38:12 +0000 UTC" firstStartedPulling="2025-10-01 16:38:13.992965547 +0000 UTC m=+3393.298571778" lastFinishedPulling="2025-10-01 16:38:56.094671479 +0000 UTC m=+3435.400277710" observedRunningTime="2025-10-01 16:38:58.001355774 +0000 UTC m=+3437.306961985" watchObservedRunningTime="2025-10-01 16:38:58.006341562 +0000 UTC m=+3437.311947763" Oct 01 16:38:58 crc kubenswrapper[4949]: I1001 16:38:58.068473 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:38:58 crc kubenswrapper[4949]: I1001 16:38:58.255188 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vx8rp"] Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.014149 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vx8rp" podUID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerName="registry-server" containerID="cri-o://2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a" gracePeriod=2 Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.564261 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.706052 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-utilities\") pod \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.706420 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-catalog-content\") pod \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.706527 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9nhd5\" (UniqueName: \"kubernetes.io/projected/8ffe1c1b-7237-478f-b718-ae5c36b65d15-kube-api-access-9nhd5\") pod \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\" (UID: \"8ffe1c1b-7237-478f-b718-ae5c36b65d15\") " Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.707727 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-utilities" (OuterVolumeSpecName: "utilities") pod "8ffe1c1b-7237-478f-b718-ae5c36b65d15" (UID: "8ffe1c1b-7237-478f-b718-ae5c36b65d15"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.727310 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8ffe1c1b-7237-478f-b718-ae5c36b65d15-kube-api-access-9nhd5" (OuterVolumeSpecName: "kube-api-access-9nhd5") pod "8ffe1c1b-7237-478f-b718-ae5c36b65d15" (UID: "8ffe1c1b-7237-478f-b718-ae5c36b65d15"). InnerVolumeSpecName "kube-api-access-9nhd5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.783836 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8ffe1c1b-7237-478f-b718-ae5c36b65d15" (UID: "8ffe1c1b-7237-478f-b718-ae5c36b65d15"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.808519 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9nhd5\" (UniqueName: \"kubernetes.io/projected/8ffe1c1b-7237-478f-b718-ae5c36b65d15-kube-api-access-9nhd5\") on node \"crc\" DevicePath \"\"" Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.808548 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:39:00 crc kubenswrapper[4949]: I1001 16:39:00.808557 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8ffe1c1b-7237-478f-b718-ae5c36b65d15-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.034435 4949 generic.go:334] "Generic (PLEG): container finished" podID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerID="2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a" exitCode=0 Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.034497 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx8rp" event={"ID":"8ffe1c1b-7237-478f-b718-ae5c36b65d15","Type":"ContainerDied","Data":"2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a"} Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.034540 4949 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vx8rp" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.034558 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vx8rp" event={"ID":"8ffe1c1b-7237-478f-b718-ae5c36b65d15","Type":"ContainerDied","Data":"0b67f68a0affe8e64ca939077179c081e1efc15ce7e45e49af7c5d15d36bebe4"} Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.034761 4949 scope.go:117] "RemoveContainer" containerID="2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.070303 4949 scope.go:117] "RemoveContainer" containerID="2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.085296 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vx8rp"] Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.095800 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vx8rp"] Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.107734 4949 scope.go:117] "RemoveContainer" containerID="d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.143987 4949 scope.go:117] "RemoveContainer" containerID="2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a" Oct 01 16:39:01 crc kubenswrapper[4949]: E1001 16:39:01.144494 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a\": container with ID starting with 2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a not found: ID does not exist" containerID="2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.144549 
4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a"} err="failed to get container status \"2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a\": rpc error: code = NotFound desc = could not find container \"2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a\": container with ID starting with 2ccd2da675d46f2ca95a2c91503b74ec24106335f1b05754520944fe3d60046a not found: ID does not exist" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.144585 4949 scope.go:117] "RemoveContainer" containerID="2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6" Oct 01 16:39:01 crc kubenswrapper[4949]: E1001 16:39:01.144987 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6\": container with ID starting with 2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6 not found: ID does not exist" containerID="2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.145108 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6"} err="failed to get container status \"2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6\": rpc error: code = NotFound desc = could not find container \"2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6\": container with ID starting with 2dd56c49c8beb8ed5a9805f8abfcab8ba2d926a598684625a07de2ad555239b6 not found: ID does not exist" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.145289 4949 scope.go:117] "RemoveContainer" containerID="d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14" Oct 01 16:39:01 crc kubenswrapper[4949]: E1001 
16:39:01.146733 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14\": container with ID starting with d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14 not found: ID does not exist" containerID="d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.146772 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14"} err="failed to get container status \"d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14\": rpc error: code = NotFound desc = could not find container \"d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14\": container with ID starting with d25d805eb6c87fd71517145e94528ce115a213b8bbd4a472953477bea889fb14 not found: ID does not exist" Oct 01 16:39:01 crc kubenswrapper[4949]: I1001 16:39:01.631357 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" path="/var/lib/kubelet/pods/8ffe1c1b-7237-478f-b718-ae5c36b65d15/volumes" Oct 01 16:39:48 crc kubenswrapper[4949]: I1001 16:39:48.039308 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:39:48 crc kubenswrapper[4949]: I1001 16:39:48.040078 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 01 16:40:18 crc kubenswrapper[4949]: I1001 16:40:18.039151 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:40:18 crc kubenswrapper[4949]: I1001 16:40:18.039694 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.727424 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m5cl5"] Oct 01 16:40:27 crc kubenswrapper[4949]: E1001 16:40:27.728920 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerName="registry-server" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.728954 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerName="registry-server" Oct 01 16:40:27 crc kubenswrapper[4949]: E1001 16:40:27.728988 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerName="extract-content" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.729006 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerName="extract-content" Oct 01 16:40:27 crc kubenswrapper[4949]: E1001 16:40:27.729069 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerName="extract-utilities" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.729090 4949 
state_mem.go:107] "Deleted CPUSet assignment" podUID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerName="extract-utilities" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.729574 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8ffe1c1b-7237-478f-b718-ae5c36b65d15" containerName="registry-server" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.732301 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.741420 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m5cl5"] Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.895792 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jlsrl\" (UniqueName: \"kubernetes.io/projected/f91face3-a562-4e23-a7ce-a887644746fd-kube-api-access-jlsrl\") pod \"community-operators-m5cl5\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.895857 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-catalog-content\") pod \"community-operators-m5cl5\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.895977 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-utilities\") pod \"community-operators-m5cl5\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 
16:40:27.998485 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jlsrl\" (UniqueName: \"kubernetes.io/projected/f91face3-a562-4e23-a7ce-a887644746fd-kube-api-access-jlsrl\") pod \"community-operators-m5cl5\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.998534 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-catalog-content\") pod \"community-operators-m5cl5\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.998597 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-utilities\") pod \"community-operators-m5cl5\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.999406 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-utilities\") pod \"community-operators-m5cl5\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:27 crc kubenswrapper[4949]: I1001 16:40:27.999659 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-catalog-content\") pod \"community-operators-m5cl5\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:28 crc kubenswrapper[4949]: I1001 16:40:28.035668 4949 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jlsrl\" (UniqueName: \"kubernetes.io/projected/f91face3-a562-4e23-a7ce-a887644746fd-kube-api-access-jlsrl\") pod \"community-operators-m5cl5\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:28 crc kubenswrapper[4949]: I1001 16:40:28.060370 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:28 crc kubenswrapper[4949]: I1001 16:40:28.598834 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m5cl5"] Oct 01 16:40:29 crc kubenswrapper[4949]: I1001 16:40:29.094890 4949 generic.go:334] "Generic (PLEG): container finished" podID="f91face3-a562-4e23-a7ce-a887644746fd" containerID="ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba" exitCode=0 Oct 01 16:40:29 crc kubenswrapper[4949]: I1001 16:40:29.094956 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5cl5" event={"ID":"f91face3-a562-4e23-a7ce-a887644746fd","Type":"ContainerDied","Data":"ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba"} Oct 01 16:40:29 crc kubenswrapper[4949]: I1001 16:40:29.094999 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5cl5" event={"ID":"f91face3-a562-4e23-a7ce-a887644746fd","Type":"ContainerStarted","Data":"c6fd631c54fc15a55f5a2659e2555e85d8313fdfe746f1d298e8959c9e117442"} Oct 01 16:40:30 crc kubenswrapper[4949]: I1001 16:40:30.106075 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5cl5" event={"ID":"f91face3-a562-4e23-a7ce-a887644746fd","Type":"ContainerStarted","Data":"5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7"} Oct 01 16:40:31 crc kubenswrapper[4949]: I1001 16:40:31.116070 4949 
generic.go:334] "Generic (PLEG): container finished" podID="f91face3-a562-4e23-a7ce-a887644746fd" containerID="5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7" exitCode=0 Oct 01 16:40:31 crc kubenswrapper[4949]: I1001 16:40:31.116165 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5cl5" event={"ID":"f91face3-a562-4e23-a7ce-a887644746fd","Type":"ContainerDied","Data":"5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7"} Oct 01 16:40:31 crc kubenswrapper[4949]: I1001 16:40:31.119795 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 16:40:32 crc kubenswrapper[4949]: I1001 16:40:32.134888 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5cl5" event={"ID":"f91face3-a562-4e23-a7ce-a887644746fd","Type":"ContainerStarted","Data":"ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c"} Oct 01 16:40:32 crc kubenswrapper[4949]: I1001 16:40:32.168983 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m5cl5" podStartSLOduration=2.452115596 podStartE2EDuration="5.168966407s" podCreationTimestamp="2025-10-01 16:40:27 +0000 UTC" firstStartedPulling="2025-10-01 16:40:29.097908436 +0000 UTC m=+3528.403514667" lastFinishedPulling="2025-10-01 16:40:31.814759257 +0000 UTC m=+3531.120365478" observedRunningTime="2025-10-01 16:40:32.163504876 +0000 UTC m=+3531.469111077" watchObservedRunningTime="2025-10-01 16:40:32.168966407 +0000 UTC m=+3531.474572598" Oct 01 16:40:38 crc kubenswrapper[4949]: I1001 16:40:38.061217 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:38 crc kubenswrapper[4949]: I1001 16:40:38.061995 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:38 crc kubenswrapper[4949]: I1001 16:40:38.136774 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:38 crc kubenswrapper[4949]: I1001 16:40:38.271055 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:38 crc kubenswrapper[4949]: I1001 16:40:38.384646 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m5cl5"] Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.217099 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m5cl5" podUID="f91face3-a562-4e23-a7ce-a887644746fd" containerName="registry-server" containerID="cri-o://ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c" gracePeriod=2 Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.726113 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.804773 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jlsrl\" (UniqueName: \"kubernetes.io/projected/f91face3-a562-4e23-a7ce-a887644746fd-kube-api-access-jlsrl\") pod \"f91face3-a562-4e23-a7ce-a887644746fd\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.804981 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-utilities\") pod \"f91face3-a562-4e23-a7ce-a887644746fd\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.805250 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-catalog-content\") pod \"f91face3-a562-4e23-a7ce-a887644746fd\" (UID: \"f91face3-a562-4e23-a7ce-a887644746fd\") " Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.807082 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-utilities" (OuterVolumeSpecName: "utilities") pod "f91face3-a562-4e23-a7ce-a887644746fd" (UID: "f91face3-a562-4e23-a7ce-a887644746fd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.814628 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f91face3-a562-4e23-a7ce-a887644746fd-kube-api-access-jlsrl" (OuterVolumeSpecName: "kube-api-access-jlsrl") pod "f91face3-a562-4e23-a7ce-a887644746fd" (UID: "f91face3-a562-4e23-a7ce-a887644746fd"). InnerVolumeSpecName "kube-api-access-jlsrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.860712 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f91face3-a562-4e23-a7ce-a887644746fd" (UID: "f91face3-a562-4e23-a7ce-a887644746fd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.908369 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jlsrl\" (UniqueName: \"kubernetes.io/projected/f91face3-a562-4e23-a7ce-a887644746fd-kube-api-access-jlsrl\") on node \"crc\" DevicePath \"\"" Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.908507 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:40:40 crc kubenswrapper[4949]: I1001 16:40:40.908538 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f91face3-a562-4e23-a7ce-a887644746fd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.227790 4949 generic.go:334] "Generic (PLEG): container finished" podID="f91face3-a562-4e23-a7ce-a887644746fd" containerID="ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c" exitCode=0 Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.227873 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m5cl5" event={"ID":"f91face3-a562-4e23-a7ce-a887644746fd","Type":"ContainerDied","Data":"ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c"} Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.227924 4949 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/community-operators-m5cl5" event={"ID":"f91face3-a562-4e23-a7ce-a887644746fd","Type":"ContainerDied","Data":"c6fd631c54fc15a55f5a2659e2555e85d8313fdfe746f1d298e8959c9e117442"} Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.227883 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m5cl5" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.227945 4949 scope.go:117] "RemoveContainer" containerID="ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.269414 4949 scope.go:117] "RemoveContainer" containerID="5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.278524 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m5cl5"] Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.288493 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m5cl5"] Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.305860 4949 scope.go:117] "RemoveContainer" containerID="ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.350547 4949 scope.go:117] "RemoveContainer" containerID="ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c" Oct 01 16:40:41 crc kubenswrapper[4949]: E1001 16:40:41.351174 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c\": container with ID starting with ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c not found: ID does not exist" containerID="ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 
16:40:41.351206 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c"} err="failed to get container status \"ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c\": rpc error: code = NotFound desc = could not find container \"ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c\": container with ID starting with ad858a4e63a915c3ffe6e77b475da9b34e1f9aea48b8f48e46057fffcf42fa6c not found: ID does not exist" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.351226 4949 scope.go:117] "RemoveContainer" containerID="5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7" Oct 01 16:40:41 crc kubenswrapper[4949]: E1001 16:40:41.351921 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7\": container with ID starting with 5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7 not found: ID does not exist" containerID="5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.351999 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7"} err="failed to get container status \"5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7\": rpc error: code = NotFound desc = could not find container \"5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7\": container with ID starting with 5511a0cd442658f05d85ae63042de7ca7e8961a1a099fd12dc4f86887c2ab6f7 not found: ID does not exist" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.352045 4949 scope.go:117] "RemoveContainer" containerID="ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba" Oct 01 16:40:41 crc 
kubenswrapper[4949]: E1001 16:40:41.352524 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba\": container with ID starting with ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba not found: ID does not exist" containerID="ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.352548 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba"} err="failed to get container status \"ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba\": rpc error: code = NotFound desc = could not find container \"ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba\": container with ID starting with ef8a9552103785d47f65813ac8fb587e8169f756a185cb07eab7e0a8fb26daba not found: ID does not exist" Oct 01 16:40:41 crc kubenswrapper[4949]: I1001 16:40:41.622401 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f91face3-a562-4e23-a7ce-a887644746fd" path="/var/lib/kubelet/pods/f91face3-a562-4e23-a7ce-a887644746fd/volumes" Oct 01 16:40:48 crc kubenswrapper[4949]: I1001 16:40:48.038514 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:40:48 crc kubenswrapper[4949]: I1001 16:40:48.039112 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" Oct 01 16:40:48 crc kubenswrapper[4949]: I1001 16:40:48.039196 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 16:40:48 crc kubenswrapper[4949]: I1001 16:40:48.040235 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 16:40:48 crc kubenswrapper[4949]: I1001 16:40:48.040328 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" gracePeriod=600 Oct 01 16:40:48 crc kubenswrapper[4949]: E1001 16:40:48.169649 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:40:48 crc kubenswrapper[4949]: I1001 16:40:48.324174 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" exitCode=0 Oct 01 16:40:48 crc kubenswrapper[4949]: I1001 16:40:48.324226 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f"} Oct 01 16:40:48 crc kubenswrapper[4949]: I1001 16:40:48.324309 4949 scope.go:117] "RemoveContainer" containerID="2ae213e758e493fc55d198c1a675039b65ae449df8af35955dd5bb19ef866b5d" Oct 01 16:40:48 crc kubenswrapper[4949]: I1001 16:40:48.334109 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:40:48 crc kubenswrapper[4949]: E1001 16:40:48.337524 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:41:01 crc kubenswrapper[4949]: I1001 16:41:01.625388 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:41:01 crc kubenswrapper[4949]: E1001 16:41:01.627329 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:41:15 crc kubenswrapper[4949]: I1001 16:41:15.601727 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:41:15 crc kubenswrapper[4949]: E1001 16:41:15.602366 4949 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.089941 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cfrx2"] Oct 01 16:41:17 crc kubenswrapper[4949]: E1001 16:41:17.090741 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91face3-a562-4e23-a7ce-a887644746fd" containerName="extract-utilities" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.090760 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91face3-a562-4e23-a7ce-a887644746fd" containerName="extract-utilities" Oct 01 16:41:17 crc kubenswrapper[4949]: E1001 16:41:17.090783 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91face3-a562-4e23-a7ce-a887644746fd" containerName="extract-content" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.090789 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91face3-a562-4e23-a7ce-a887644746fd" containerName="extract-content" Oct 01 16:41:17 crc kubenswrapper[4949]: E1001 16:41:17.090801 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f91face3-a562-4e23-a7ce-a887644746fd" containerName="registry-server" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.090807 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="f91face3-a562-4e23-a7ce-a887644746fd" containerName="registry-server" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.090986 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="f91face3-a562-4e23-a7ce-a887644746fd" containerName="registry-server" Oct 01 
16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.092430 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.105240 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cfrx2"] Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.179592 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-utilities\") pod \"redhat-operators-cfrx2\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.180041 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqsn9\" (UniqueName: \"kubernetes.io/projected/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-kube-api-access-lqsn9\") pod \"redhat-operators-cfrx2\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.180100 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-catalog-content\") pod \"redhat-operators-cfrx2\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.282401 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-utilities\") pod \"redhat-operators-cfrx2\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc 
kubenswrapper[4949]: I1001 16:41:17.282682 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqsn9\" (UniqueName: \"kubernetes.io/projected/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-kube-api-access-lqsn9\") pod \"redhat-operators-cfrx2\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.282721 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-catalog-content\") pod \"redhat-operators-cfrx2\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.283189 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-utilities\") pod \"redhat-operators-cfrx2\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.283224 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-catalog-content\") pod \"redhat-operators-cfrx2\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.308262 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqsn9\" (UniqueName: \"kubernetes.io/projected/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-kube-api-access-lqsn9\") pod \"redhat-operators-cfrx2\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.417205 4949 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:17 crc kubenswrapper[4949]: I1001 16:41:17.916270 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cfrx2"] Oct 01 16:41:18 crc kubenswrapper[4949]: I1001 16:41:18.660205 4949 generic.go:334] "Generic (PLEG): container finished" podID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerID="c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308" exitCode=0 Oct 01 16:41:18 crc kubenswrapper[4949]: I1001 16:41:18.660292 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfrx2" event={"ID":"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9","Type":"ContainerDied","Data":"c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308"} Oct 01 16:41:18 crc kubenswrapper[4949]: I1001 16:41:18.660546 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfrx2" event={"ID":"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9","Type":"ContainerStarted","Data":"f2f4208fb18424d6b324ffd3a683323383ecf3414d760f26321e76f4e54cef53"} Oct 01 16:41:19 crc kubenswrapper[4949]: I1001 16:41:19.672311 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfrx2" event={"ID":"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9","Type":"ContainerStarted","Data":"2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c"} Oct 01 16:41:26 crc kubenswrapper[4949]: I1001 16:41:26.767183 4949 generic.go:334] "Generic (PLEG): container finished" podID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerID="2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c" exitCode=0 Oct 01 16:41:26 crc kubenswrapper[4949]: I1001 16:41:26.767244 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfrx2" 
event={"ID":"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9","Type":"ContainerDied","Data":"2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c"} Oct 01 16:41:28 crc kubenswrapper[4949]: I1001 16:41:28.795577 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfrx2" event={"ID":"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9","Type":"ContainerStarted","Data":"0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5"} Oct 01 16:41:28 crc kubenswrapper[4949]: I1001 16:41:28.820988 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cfrx2" podStartSLOduration=2.806417526 podStartE2EDuration="11.820963627s" podCreationTimestamp="2025-10-01 16:41:17 +0000 UTC" firstStartedPulling="2025-10-01 16:41:18.662026464 +0000 UTC m=+3577.967632655" lastFinishedPulling="2025-10-01 16:41:27.676572555 +0000 UTC m=+3586.982178756" observedRunningTime="2025-10-01 16:41:28.818293484 +0000 UTC m=+3588.123899685" watchObservedRunningTime="2025-10-01 16:41:28.820963627 +0000 UTC m=+3588.126569858" Oct 01 16:41:29 crc kubenswrapper[4949]: I1001 16:41:29.602453 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:41:29 crc kubenswrapper[4949]: E1001 16:41:29.602709 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:41:37 crc kubenswrapper[4949]: I1001 16:41:37.417642 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:37 crc 
kubenswrapper[4949]: I1001 16:41:37.418456 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:38 crc kubenswrapper[4949]: I1001 16:41:38.485481 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cfrx2" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="registry-server" probeResult="failure" output=< Oct 01 16:41:38 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Oct 01 16:41:38 crc kubenswrapper[4949]: > Oct 01 16:41:41 crc kubenswrapper[4949]: I1001 16:41:41.607906 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:41:41 crc kubenswrapper[4949]: E1001 16:41:41.608552 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:41:48 crc kubenswrapper[4949]: I1001 16:41:48.558930 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-cfrx2" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="registry-server" probeResult="failure" output=< Oct 01 16:41:48 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Oct 01 16:41:48 crc kubenswrapper[4949]: > Oct 01 16:41:56 crc kubenswrapper[4949]: I1001 16:41:56.601851 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:41:56 crc kubenswrapper[4949]: E1001 16:41:56.602617 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" 
for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:41:57 crc kubenswrapper[4949]: I1001 16:41:57.484044 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:57 crc kubenswrapper[4949]: I1001 16:41:57.547116 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:57 crc kubenswrapper[4949]: I1001 16:41:57.735347 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cfrx2"] Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.158197 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-cfrx2" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="registry-server" containerID="cri-o://0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5" gracePeriod=2 Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.730638 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.837597 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lqsn9\" (UniqueName: \"kubernetes.io/projected/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-kube-api-access-lqsn9\") pod \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.837664 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-catalog-content\") pod \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.837728 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-utilities\") pod \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\" (UID: \"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9\") " Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.838779 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-utilities" (OuterVolumeSpecName: "utilities") pod "2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" (UID: "2f09f2c0-6ba0-4bb0-9f54-5cca772067a9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.851423 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-kube-api-access-lqsn9" (OuterVolumeSpecName: "kube-api-access-lqsn9") pod "2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" (UID: "2f09f2c0-6ba0-4bb0-9f54-5cca772067a9"). InnerVolumeSpecName "kube-api-access-lqsn9". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.931276 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" (UID: "2f09f2c0-6ba0-4bb0-9f54-5cca772067a9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.940229 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lqsn9\" (UniqueName: \"kubernetes.io/projected/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-kube-api-access-lqsn9\") on node \"crc\" DevicePath \"\"" Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.940271 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:41:59 crc kubenswrapper[4949]: I1001 16:41:59.940283 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.169858 4949 generic.go:334] "Generic (PLEG): container finished" podID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerID="0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5" exitCode=0 Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.169913 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfrx2" event={"ID":"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9","Type":"ContainerDied","Data":"0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5"} Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.169954 4949 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-marketplace/redhat-operators-cfrx2" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.169978 4949 scope.go:117] "RemoveContainer" containerID="0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.169967 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cfrx2" event={"ID":"2f09f2c0-6ba0-4bb0-9f54-5cca772067a9","Type":"ContainerDied","Data":"f2f4208fb18424d6b324ffd3a683323383ecf3414d760f26321e76f4e54cef53"} Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.204860 4949 scope.go:117] "RemoveContainer" containerID="2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.211572 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-cfrx2"] Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.220034 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-cfrx2"] Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.243742 4949 scope.go:117] "RemoveContainer" containerID="c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.277552 4949 scope.go:117] "RemoveContainer" containerID="0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5" Oct 01 16:42:00 crc kubenswrapper[4949]: E1001 16:42:00.278024 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5\": container with ID starting with 0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5 not found: ID does not exist" containerID="0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.278063 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5"} err="failed to get container status \"0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5\": rpc error: code = NotFound desc = could not find container \"0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5\": container with ID starting with 0cf20b8959e78818fb6664d61544f7beb6aa58ffdd93a898f49c6a63e3a580b5 not found: ID does not exist" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.278089 4949 scope.go:117] "RemoveContainer" containerID="2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c" Oct 01 16:42:00 crc kubenswrapper[4949]: E1001 16:42:00.278426 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c\": container with ID starting with 2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c not found: ID does not exist" containerID="2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.278516 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c"} err="failed to get container status \"2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c\": rpc error: code = NotFound desc = could not find container \"2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c\": container with ID starting with 2379451bafbadc379e95a7b50818d7dcbc4e3c011b1e957cfa7db3fedc765a3c not found: ID does not exist" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.278559 4949 scope.go:117] "RemoveContainer" containerID="c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308" Oct 01 16:42:00 crc kubenswrapper[4949]: E1001 
16:42:00.279956 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308\": container with ID starting with c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308 not found: ID does not exist" containerID="c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308" Oct 01 16:42:00 crc kubenswrapper[4949]: I1001 16:42:00.279987 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308"} err="failed to get container status \"c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308\": rpc error: code = NotFound desc = could not find container \"c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308\": container with ID starting with c1a8c3d405890cff0eb14cd4d31bddd876cc9efe9ceb1625f1373ff7c4c1e308 not found: ID does not exist" Oct 01 16:42:01 crc kubenswrapper[4949]: I1001 16:42:01.613134 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" path="/var/lib/kubelet/pods/2f09f2c0-6ba0-4bb0-9f54-5cca772067a9/volumes" Oct 01 16:42:11 crc kubenswrapper[4949]: I1001 16:42:11.609876 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:42:11 crc kubenswrapper[4949]: E1001 16:42:11.610736 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:42:26 crc kubenswrapper[4949]: I1001 16:42:26.602003 
4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:42:26 crc kubenswrapper[4949]: E1001 16:42:26.602672 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:42:40 crc kubenswrapper[4949]: I1001 16:42:40.602095 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:42:40 crc kubenswrapper[4949]: E1001 16:42:40.602918 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:42:53 crc kubenswrapper[4949]: I1001 16:42:53.601814 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:42:53 crc kubenswrapper[4949]: E1001 16:42:53.603160 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:43:06 crc kubenswrapper[4949]: I1001 
16:43:06.601338 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:43:06 crc kubenswrapper[4949]: E1001 16:43:06.602205 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:43:17 crc kubenswrapper[4949]: I1001 16:43:17.602040 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:43:17 crc kubenswrapper[4949]: E1001 16:43:17.603047 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:43:30 crc kubenswrapper[4949]: I1001 16:43:30.601481 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:43:30 crc kubenswrapper[4949]: E1001 16:43:30.602245 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:43:34 crc 
kubenswrapper[4949]: I1001 16:43:34.260718 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qjhnc"] Oct 01 16:43:34 crc kubenswrapper[4949]: E1001 16:43:34.262782 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="extract-content" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.262873 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="extract-content" Oct 01 16:43:34 crc kubenswrapper[4949]: E1001 16:43:34.262962 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="extract-utilities" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.263027 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="extract-utilities" Oct 01 16:43:34 crc kubenswrapper[4949]: E1001 16:43:34.263105 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="registry-server" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.263201 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="registry-server" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.263507 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="2f09f2c0-6ba0-4bb0-9f54-5cca772067a9" containerName="registry-server" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.265192 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.272075 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjhnc"] Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.333333 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mmqf\" (UniqueName: \"kubernetes.io/projected/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-kube-api-access-8mmqf\") pod \"redhat-marketplace-qjhnc\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.333440 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-catalog-content\") pod \"redhat-marketplace-qjhnc\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.333493 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-utilities\") pod \"redhat-marketplace-qjhnc\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.435411 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-utilities\") pod \"redhat-marketplace-qjhnc\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.435532 4949 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"kube-api-access-8mmqf\" (UniqueName: \"kubernetes.io/projected/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-kube-api-access-8mmqf\") pod \"redhat-marketplace-qjhnc\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.435628 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-catalog-content\") pod \"redhat-marketplace-qjhnc\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.435959 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-utilities\") pod \"redhat-marketplace-qjhnc\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.436269 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-catalog-content\") pod \"redhat-marketplace-qjhnc\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.459343 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mmqf\" (UniqueName: \"kubernetes.io/projected/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-kube-api-access-8mmqf\") pod \"redhat-marketplace-qjhnc\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:34 crc kubenswrapper[4949]: I1001 16:43:34.600729 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:35 crc kubenswrapper[4949]: I1001 16:43:35.034823 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjhnc"] Oct 01 16:43:35 crc kubenswrapper[4949]: I1001 16:43:35.090116 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjhnc" event={"ID":"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7","Type":"ContainerStarted","Data":"1486964876e1e29d3c99af875e6d90f32687df332016dbd6b969671c542cf1bc"} Oct 01 16:43:36 crc kubenswrapper[4949]: I1001 16:43:36.121429 4949 generic.go:334] "Generic (PLEG): container finished" podID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerID="699d73f2e667f5664c82babfbcb28285905c859803f2df52988b27094058b12f" exitCode=0 Oct 01 16:43:36 crc kubenswrapper[4949]: I1001 16:43:36.121624 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjhnc" event={"ID":"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7","Type":"ContainerDied","Data":"699d73f2e667f5664c82babfbcb28285905c859803f2df52988b27094058b12f"} Oct 01 16:43:38 crc kubenswrapper[4949]: I1001 16:43:38.141147 4949 generic.go:334] "Generic (PLEG): container finished" podID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerID="c6e1f0044915ce10fed75ef1f28358bd765961147eded2cc4b8a981da1808727" exitCode=0 Oct 01 16:43:38 crc kubenswrapper[4949]: I1001 16:43:38.141568 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjhnc" event={"ID":"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7","Type":"ContainerDied","Data":"c6e1f0044915ce10fed75ef1f28358bd765961147eded2cc4b8a981da1808727"} Oct 01 16:43:40 crc kubenswrapper[4949]: I1001 16:43:40.169064 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjhnc" 
event={"ID":"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7","Type":"ContainerStarted","Data":"8e861105152f582937fb6e9a0b7e078ce30465ed7093eeea90f3b226cf5c1d67"} Oct 01 16:43:40 crc kubenswrapper[4949]: I1001 16:43:40.193813 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qjhnc" podStartSLOduration=3.280059781 podStartE2EDuration="6.193770744s" podCreationTimestamp="2025-10-01 16:43:34 +0000 UTC" firstStartedPulling="2025-10-01 16:43:36.126288158 +0000 UTC m=+3715.431894349" lastFinishedPulling="2025-10-01 16:43:39.039999111 +0000 UTC m=+3718.345605312" observedRunningTime="2025-10-01 16:43:40.188074086 +0000 UTC m=+3719.493680317" watchObservedRunningTime="2025-10-01 16:43:40.193770744 +0000 UTC m=+3719.499376945" Oct 01 16:43:44 crc kubenswrapper[4949]: I1001 16:43:44.600917 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:44 crc kubenswrapper[4949]: I1001 16:43:44.601655 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:44 crc kubenswrapper[4949]: I1001 16:43:44.602884 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:43:44 crc kubenswrapper[4949]: E1001 16:43:44.603171 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:43:44 crc kubenswrapper[4949]: I1001 16:43:44.675534 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:45 crc kubenswrapper[4949]: I1001 16:43:45.288943 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:45 crc kubenswrapper[4949]: I1001 16:43:45.340058 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjhnc"] Oct 01 16:43:47 crc kubenswrapper[4949]: I1001 16:43:47.249894 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qjhnc" podUID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerName="registry-server" containerID="cri-o://8e861105152f582937fb6e9a0b7e078ce30465ed7093eeea90f3b226cf5c1d67" gracePeriod=2 Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.260959 4949 generic.go:334] "Generic (PLEG): container finished" podID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerID="8e861105152f582937fb6e9a0b7e078ce30465ed7093eeea90f3b226cf5c1d67" exitCode=0 Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.261030 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjhnc" event={"ID":"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7","Type":"ContainerDied","Data":"8e861105152f582937fb6e9a0b7e078ce30465ed7093eeea90f3b226cf5c1d67"} Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.543984 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.645197 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mmqf\" (UniqueName: \"kubernetes.io/projected/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-kube-api-access-8mmqf\") pod \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.645338 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-utilities\") pod \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.645517 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-catalog-content\") pod \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\" (UID: \"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7\") " Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.646324 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-utilities" (OuterVolumeSpecName: "utilities") pod "dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" (UID: "dd43c2c7-db16-4a21-875e-c1d3bf4f61a7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.655355 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-kube-api-access-8mmqf" (OuterVolumeSpecName: "kube-api-access-8mmqf") pod "dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" (UID: "dd43c2c7-db16-4a21-875e-c1d3bf4f61a7"). InnerVolumeSpecName "kube-api-access-8mmqf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.659260 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" (UID: "dd43c2c7-db16-4a21-875e-c1d3bf4f61a7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.747433 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.747469 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mmqf\" (UniqueName: \"kubernetes.io/projected/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-kube-api-access-8mmqf\") on node \"crc\" DevicePath \"\"" Oct 01 16:43:48 crc kubenswrapper[4949]: I1001 16:43:48.747483 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:43:49 crc kubenswrapper[4949]: I1001 16:43:49.272572 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qjhnc" event={"ID":"dd43c2c7-db16-4a21-875e-c1d3bf4f61a7","Type":"ContainerDied","Data":"1486964876e1e29d3c99af875e6d90f32687df332016dbd6b969671c542cf1bc"} Oct 01 16:43:49 crc kubenswrapper[4949]: I1001 16:43:49.272716 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qjhnc" Oct 01 16:43:49 crc kubenswrapper[4949]: I1001 16:43:49.272942 4949 scope.go:117] "RemoveContainer" containerID="8e861105152f582937fb6e9a0b7e078ce30465ed7093eeea90f3b226cf5c1d67" Oct 01 16:43:49 crc kubenswrapper[4949]: I1001 16:43:49.299487 4949 scope.go:117] "RemoveContainer" containerID="c6e1f0044915ce10fed75ef1f28358bd765961147eded2cc4b8a981da1808727" Oct 01 16:43:49 crc kubenswrapper[4949]: I1001 16:43:49.321859 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjhnc"] Oct 01 16:43:49 crc kubenswrapper[4949]: I1001 16:43:49.330597 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qjhnc"] Oct 01 16:43:49 crc kubenswrapper[4949]: I1001 16:43:49.337364 4949 scope.go:117] "RemoveContainer" containerID="699d73f2e667f5664c82babfbcb28285905c859803f2df52988b27094058b12f" Oct 01 16:43:49 crc kubenswrapper[4949]: I1001 16:43:49.616315 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" path="/var/lib/kubelet/pods/dd43c2c7-db16-4a21-875e-c1d3bf4f61a7/volumes" Oct 01 16:43:57 crc kubenswrapper[4949]: I1001 16:43:57.602481 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:43:57 crc kubenswrapper[4949]: E1001 16:43:57.603268 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:44:12 crc kubenswrapper[4949]: I1001 16:44:12.601627 4949 scope.go:117] "RemoveContainer" 
containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:44:12 crc kubenswrapper[4949]: E1001 16:44:12.602424 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:44:23 crc kubenswrapper[4949]: I1001 16:44:23.601873 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:44:23 crc kubenswrapper[4949]: E1001 16:44:23.602583 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:44:38 crc kubenswrapper[4949]: I1001 16:44:38.602338 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:44:38 crc kubenswrapper[4949]: E1001 16:44:38.603141 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:44:49 crc kubenswrapper[4949]: I1001 16:44:49.601688 4949 scope.go:117] 
"RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:44:49 crc kubenswrapper[4949]: E1001 16:44:49.602529 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.145388 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl"] Oct 01 16:45:00 crc kubenswrapper[4949]: E1001 16:45:00.146445 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerName="extract-utilities" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.146466 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerName="extract-utilities" Oct 01 16:45:00 crc kubenswrapper[4949]: E1001 16:45:00.146497 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerName="extract-content" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.146506 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerName="extract-content" Oct 01 16:45:00 crc kubenswrapper[4949]: E1001 16:45:00.146549 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerName="registry-server" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.146557 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerName="registry-server" Oct 01 16:45:00 crc 
kubenswrapper[4949]: I1001 16:45:00.146781 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="dd43c2c7-db16-4a21-875e-c1d3bf4f61a7" containerName="registry-server" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.148237 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.151439 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.152446 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.159629 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl"] Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.233976 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbf08b9-e454-4705-af84-cc056ec8e2ca-config-volume\") pod \"collect-profiles-29322285-4lfsl\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.234268 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-md2v8\" (UniqueName: \"kubernetes.io/projected/6fbf08b9-e454-4705-af84-cc056ec8e2ca-kube-api-access-md2v8\") pod \"collect-profiles-29322285-4lfsl\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.234318 4949 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbf08b9-e454-4705-af84-cc056ec8e2ca-secret-volume\") pod \"collect-profiles-29322285-4lfsl\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.335515 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbf08b9-e454-4705-af84-cc056ec8e2ca-config-volume\") pod \"collect-profiles-29322285-4lfsl\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.335680 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-md2v8\" (UniqueName: \"kubernetes.io/projected/6fbf08b9-e454-4705-af84-cc056ec8e2ca-kube-api-access-md2v8\") pod \"collect-profiles-29322285-4lfsl\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.335708 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbf08b9-e454-4705-af84-cc056ec8e2ca-secret-volume\") pod \"collect-profiles-29322285-4lfsl\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.336760 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbf08b9-e454-4705-af84-cc056ec8e2ca-config-volume\") pod \"collect-profiles-29322285-4lfsl\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.347058 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbf08b9-e454-4705-af84-cc056ec8e2ca-secret-volume\") pod \"collect-profiles-29322285-4lfsl\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.352243 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-md2v8\" (UniqueName: \"kubernetes.io/projected/6fbf08b9-e454-4705-af84-cc056ec8e2ca-kube-api-access-md2v8\") pod \"collect-profiles-29322285-4lfsl\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.481481 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:00 crc kubenswrapper[4949]: I1001 16:45:00.941924 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl"] Oct 01 16:45:01 crc kubenswrapper[4949]: I1001 16:45:01.060509 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" event={"ID":"6fbf08b9-e454-4705-af84-cc056ec8e2ca","Type":"ContainerStarted","Data":"6216a0c1e59b84abbf40bc0d207f9b688a97e41f741f05774f903459cf13c749"} Oct 01 16:45:02 crc kubenswrapper[4949]: I1001 16:45:02.071166 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" event={"ID":"6fbf08b9-e454-4705-af84-cc056ec8e2ca","Type":"ContainerStarted","Data":"3c6e15bda750117b7f52b223f100a90cd9428dd41d0c7a89d9d3c5416b74d99b"} Oct 01 16:45:02 crc kubenswrapper[4949]: I1001 16:45:02.104397 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" podStartSLOduration=2.104380075 podStartE2EDuration="2.104380075s" podCreationTimestamp="2025-10-01 16:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 16:45:02.098735589 +0000 UTC m=+3801.404341800" watchObservedRunningTime="2025-10-01 16:45:02.104380075 +0000 UTC m=+3801.409986266" Oct 01 16:45:03 crc kubenswrapper[4949]: I1001 16:45:03.081606 4949 generic.go:334] "Generic (PLEG): container finished" podID="6fbf08b9-e454-4705-af84-cc056ec8e2ca" containerID="3c6e15bda750117b7f52b223f100a90cd9428dd41d0c7a89d9d3c5416b74d99b" exitCode=0 Oct 01 16:45:03 crc kubenswrapper[4949]: I1001 16:45:03.081947 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" event={"ID":"6fbf08b9-e454-4705-af84-cc056ec8e2ca","Type":"ContainerDied","Data":"3c6e15bda750117b7f52b223f100a90cd9428dd41d0c7a89d9d3c5416b74d99b"} Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.601746 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:45:04 crc kubenswrapper[4949]: E1001 16:45:04.602467 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.614169 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.712202 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"] Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.727674 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322240-gggbl"] Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.739067 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbf08b9-e454-4705-af84-cc056ec8e2ca-secret-volume\") pod \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.739282 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbf08b9-e454-4705-af84-cc056ec8e2ca-config-volume\") pod \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.739376 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-md2v8\" (UniqueName: \"kubernetes.io/projected/6fbf08b9-e454-4705-af84-cc056ec8e2ca-kube-api-access-md2v8\") pod \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\" (UID: \"6fbf08b9-e454-4705-af84-cc056ec8e2ca\") " Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.745610 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6fbf08b9-e454-4705-af84-cc056ec8e2ca-config-volume" (OuterVolumeSpecName: "config-volume") pod "6fbf08b9-e454-4705-af84-cc056ec8e2ca" (UID: "6fbf08b9-e454-4705-af84-cc056ec8e2ca"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.748847 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6fbf08b9-e454-4705-af84-cc056ec8e2ca-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "6fbf08b9-e454-4705-af84-cc056ec8e2ca" (UID: "6fbf08b9-e454-4705-af84-cc056ec8e2ca"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.749463 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fbf08b9-e454-4705-af84-cc056ec8e2ca-kube-api-access-md2v8" (OuterVolumeSpecName: "kube-api-access-md2v8") pod "6fbf08b9-e454-4705-af84-cc056ec8e2ca" (UID: "6fbf08b9-e454-4705-af84-cc056ec8e2ca"). InnerVolumeSpecName "kube-api-access-md2v8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.842741 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-md2v8\" (UniqueName: \"kubernetes.io/projected/6fbf08b9-e454-4705-af84-cc056ec8e2ca-kube-api-access-md2v8\") on node \"crc\" DevicePath \"\"" Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.842780 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/6fbf08b9-e454-4705-af84-cc056ec8e2ca-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 01 16:45:04 crc kubenswrapper[4949]: I1001 16:45:04.842792 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6fbf08b9-e454-4705-af84-cc056ec8e2ca-config-volume\") on node \"crc\" DevicePath \"\"" Oct 01 16:45:05 crc kubenswrapper[4949]: I1001 16:45:05.101179 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" 
event={"ID":"6fbf08b9-e454-4705-af84-cc056ec8e2ca","Type":"ContainerDied","Data":"6216a0c1e59b84abbf40bc0d207f9b688a97e41f741f05774f903459cf13c749"} Oct 01 16:45:05 crc kubenswrapper[4949]: I1001 16:45:05.101677 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6216a0c1e59b84abbf40bc0d207f9b688a97e41f741f05774f903459cf13c749" Oct 01 16:45:05 crc kubenswrapper[4949]: I1001 16:45:05.101218 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322285-4lfsl" Oct 01 16:45:05 crc kubenswrapper[4949]: I1001 16:45:05.621670 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b5b65325-4c38-4c7a-99b4-3fc38060f598" path="/var/lib/kubelet/pods/b5b65325-4c38-4c7a-99b4-3fc38060f598/volumes" Oct 01 16:45:15 crc kubenswrapper[4949]: I1001 16:45:15.601338 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:45:15 crc kubenswrapper[4949]: E1001 16:45:15.602106 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:45:28 crc kubenswrapper[4949]: I1001 16:45:28.603672 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:45:28 crc kubenswrapper[4949]: E1001 16:45:28.605596 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:45:34 crc kubenswrapper[4949]: I1001 16:45:34.044855 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-create-srpdf"] Oct 01 16:45:34 crc kubenswrapper[4949]: I1001 16:45:34.052907 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-create-srpdf"] Oct 01 16:45:35 crc kubenswrapper[4949]: I1001 16:45:35.623777 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d" path="/var/lib/kubelet/pods/4fbb69f3-8bbc-4acd-97eb-b67dcd314f2d/volumes" Oct 01 16:45:42 crc kubenswrapper[4949]: I1001 16:45:42.602951 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:45:42 crc kubenswrapper[4949]: E1001 16:45:42.606931 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:45:45 crc kubenswrapper[4949]: I1001 16:45:45.024939 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-97c5-account-create-f68gb"] Oct 01 16:45:45 crc kubenswrapper[4949]: I1001 16:45:45.034103 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-97c5-account-create-f68gb"] Oct 01 16:45:45 crc kubenswrapper[4949]: I1001 16:45:45.612363 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837eef8c-5b04-4afe-90a1-05f19f168bba" 
path="/var/lib/kubelet/pods/837eef8c-5b04-4afe-90a1-05f19f168bba/volumes" Oct 01 16:45:48 crc kubenswrapper[4949]: I1001 16:45:48.468889 4949 scope.go:117] "RemoveContainer" containerID="e1de2de2ea6f7e7f607a8e84d1054c2b1e49b2ff79099848223ed324c405699b" Oct 01 16:45:48 crc kubenswrapper[4949]: I1001 16:45:48.582921 4949 scope.go:117] "RemoveContainer" containerID="4ee2ecb36bb3f42c8e16bdfacc147b29d74a14d5d411bb8773a939e730be58a7" Oct 01 16:45:48 crc kubenswrapper[4949]: I1001 16:45:48.632635 4949 scope.go:117] "RemoveContainer" containerID="515e3b028d6e494898ffce45e312069a6c7492536b2a441f16660fa62487d4f0" Oct 01 16:45:57 crc kubenswrapper[4949]: I1001 16:45:57.601444 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:45:58 crc kubenswrapper[4949]: I1001 16:45:58.639983 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"96294e80ce7fbdd1396feefeeb458695df351cf1bda29b0fa09dc48d23e27708"} Oct 01 16:46:08 crc kubenswrapper[4949]: I1001 16:46:08.057646 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack/manila-db-sync-c2td2"] Oct 01 16:46:08 crc kubenswrapper[4949]: I1001 16:46:08.068530 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack/manila-db-sync-c2td2"] Oct 01 16:46:09 crc kubenswrapper[4949]: I1001 16:46:09.625499 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b775d20-0507-4612-af5a-3400bd30e637" path="/var/lib/kubelet/pods/6b775d20-0507-4612-af5a-3400bd30e637/volumes" Oct 01 16:46:48 crc kubenswrapper[4949]: I1001 16:46:48.734325 4949 scope.go:117] "RemoveContainer" containerID="3242fd9f074592e47af74d4ab46e9b01ec7b0182fbe1e5c565b422005acde5d2" Oct 01 16:48:18 crc kubenswrapper[4949]: I1001 16:48:18.038478 4949 patch_prober.go:28] interesting 
pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:48:18 crc kubenswrapper[4949]: I1001 16:48:18.039090 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:48:48 crc kubenswrapper[4949]: I1001 16:48:48.038441 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:48:48 crc kubenswrapper[4949]: I1001 16:48:48.038875 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:49:18 crc kubenswrapper[4949]: I1001 16:49:18.038570 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:49:18 crc kubenswrapper[4949]: I1001 16:49:18.039482 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:49:18 crc kubenswrapper[4949]: I1001 16:49:18.039550 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 16:49:18 crc kubenswrapper[4949]: I1001 16:49:18.040778 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"96294e80ce7fbdd1396feefeeb458695df351cf1bda29b0fa09dc48d23e27708"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 16:49:18 crc kubenswrapper[4949]: I1001 16:49:18.040897 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://96294e80ce7fbdd1396feefeeb458695df351cf1bda29b0fa09dc48d23e27708" gracePeriod=600 Oct 01 16:49:18 crc kubenswrapper[4949]: I1001 16:49:18.569603 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="96294e80ce7fbdd1396feefeeb458695df351cf1bda29b0fa09dc48d23e27708" exitCode=0 Oct 01 16:49:18 crc kubenswrapper[4949]: I1001 16:49:18.569663 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"96294e80ce7fbdd1396feefeeb458695df351cf1bda29b0fa09dc48d23e27708"} Oct 01 16:49:18 crc kubenswrapper[4949]: I1001 16:49:18.570017 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" 
event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5"} Oct 01 16:49:18 crc kubenswrapper[4949]: I1001 16:49:18.570048 4949 scope.go:117] "RemoveContainer" containerID="3f27d6beb245179886854d4e0b571554a832cd0a55ee5032309258751bf2c77f" Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.401101 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2slrz"] Oct 01 16:49:22 crc kubenswrapper[4949]: E1001 16:49:22.404187 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6fbf08b9-e454-4705-af84-cc056ec8e2ca" containerName="collect-profiles" Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.404367 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fbf08b9-e454-4705-af84-cc056ec8e2ca" containerName="collect-profiles" Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.404913 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6fbf08b9-e454-4705-af84-cc056ec8e2ca" containerName="collect-profiles" Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.408168 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.416039 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2slrz"]
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.571929 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w22cm\" (UniqueName: \"kubernetes.io/projected/c81a5762-5dc6-42b0-a623-6f61ac900ab4-kube-api-access-w22cm\") pod \"certified-operators-2slrz\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") " pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.571980 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-utilities\") pod \"certified-operators-2slrz\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") " pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.572176 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-catalog-content\") pod \"certified-operators-2slrz\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") " pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.673725 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w22cm\" (UniqueName: \"kubernetes.io/projected/c81a5762-5dc6-42b0-a623-6f61ac900ab4-kube-api-access-w22cm\") pod \"certified-operators-2slrz\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") " pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.674041 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-utilities\") pod \"certified-operators-2slrz\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") " pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.674141 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-catalog-content\") pod \"certified-operators-2slrz\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") " pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.674769 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-utilities\") pod \"certified-operators-2slrz\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") " pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.674944 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-catalog-content\") pod \"certified-operators-2slrz\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") " pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.705661 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w22cm\" (UniqueName: \"kubernetes.io/projected/c81a5762-5dc6-42b0-a623-6f61ac900ab4-kube-api-access-w22cm\") pod \"certified-operators-2slrz\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") " pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:22 crc kubenswrapper[4949]: I1001 16:49:22.749591 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:23 crc kubenswrapper[4949]: I1001 16:49:23.340814 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2slrz"]
Oct 01 16:49:23 crc kubenswrapper[4949]: I1001 16:49:23.623350 4949 generic.go:334] "Generic (PLEG): container finished" podID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerID="8c890b84375101959886b66e08e2a1253c042c0d6bec96f3f599af3df92247b1" exitCode=0
Oct 01 16:49:23 crc kubenswrapper[4949]: I1001 16:49:23.623446 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2slrz" event={"ID":"c81a5762-5dc6-42b0-a623-6f61ac900ab4","Type":"ContainerDied","Data":"8c890b84375101959886b66e08e2a1253c042c0d6bec96f3f599af3df92247b1"}
Oct 01 16:49:23 crc kubenswrapper[4949]: I1001 16:49:23.623841 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2slrz" event={"ID":"c81a5762-5dc6-42b0-a623-6f61ac900ab4","Type":"ContainerStarted","Data":"4aa50618510399e5aa7e346fbc6ff2257dd6bfef1601602ea1eb75ff733cf3ce"}
Oct 01 16:49:23 crc kubenswrapper[4949]: I1001 16:49:23.625993 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Oct 01 16:49:25 crc kubenswrapper[4949]: I1001 16:49:25.658824 4949 generic.go:334] "Generic (PLEG): container finished" podID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerID="fda044c2a11b3ef8d2e593ce920e6ed338633541440f1759f50dc4e26ec06ca9" exitCode=0
Oct 01 16:49:25 crc kubenswrapper[4949]: I1001 16:49:25.658860 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2slrz" event={"ID":"c81a5762-5dc6-42b0-a623-6f61ac900ab4","Type":"ContainerDied","Data":"fda044c2a11b3ef8d2e593ce920e6ed338633541440f1759f50dc4e26ec06ca9"}
Oct 01 16:49:26 crc kubenswrapper[4949]: I1001 16:49:26.672266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2slrz" event={"ID":"c81a5762-5dc6-42b0-a623-6f61ac900ab4","Type":"ContainerStarted","Data":"110e142294037823c4a91b94b481c239f98a1b1bcaf0722f123f26b8cfa93552"}
Oct 01 16:49:26 crc kubenswrapper[4949]: I1001 16:49:26.691447 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2slrz" podStartSLOduration=2.039559514 podStartE2EDuration="4.691431061s" podCreationTimestamp="2025-10-01 16:49:22 +0000 UTC" firstStartedPulling="2025-10-01 16:49:23.625675692 +0000 UTC m=+4062.931281903" lastFinishedPulling="2025-10-01 16:49:26.277547259 +0000 UTC m=+4065.583153450" observedRunningTime="2025-10-01 16:49:26.68923422 +0000 UTC m=+4065.994840411" watchObservedRunningTime="2025-10-01 16:49:26.691431061 +0000 UTC m=+4065.997037242"
Oct 01 16:49:32 crc kubenswrapper[4949]: I1001 16:49:32.749773 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:32 crc kubenswrapper[4949]: I1001 16:49:32.750359 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:32 crc kubenswrapper[4949]: I1001 16:49:32.841080 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:33 crc kubenswrapper[4949]: I1001 16:49:33.816255 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:33 crc kubenswrapper[4949]: I1001 16:49:33.880499 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2slrz"]
Oct 01 16:49:35 crc kubenswrapper[4949]: I1001 16:49:35.770549 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2slrz" podUID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerName="registry-server" containerID="cri-o://110e142294037823c4a91b94b481c239f98a1b1bcaf0722f123f26b8cfa93552" gracePeriod=2
Oct 01 16:49:36 crc kubenswrapper[4949]: I1001 16:49:36.793587 4949 generic.go:334] "Generic (PLEG): container finished" podID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerID="110e142294037823c4a91b94b481c239f98a1b1bcaf0722f123f26b8cfa93552" exitCode=0
Oct 01 16:49:36 crc kubenswrapper[4949]: I1001 16:49:36.793644 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2slrz" event={"ID":"c81a5762-5dc6-42b0-a623-6f61ac900ab4","Type":"ContainerDied","Data":"110e142294037823c4a91b94b481c239f98a1b1bcaf0722f123f26b8cfa93552"}
Oct 01 16:49:36 crc kubenswrapper[4949]: I1001 16:49:36.953971 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.063044 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w22cm\" (UniqueName: \"kubernetes.io/projected/c81a5762-5dc6-42b0-a623-6f61ac900ab4-kube-api-access-w22cm\") pod \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") "
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.063206 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-utilities\") pod \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") "
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.063248 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-catalog-content\") pod \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\" (UID: \"c81a5762-5dc6-42b0-a623-6f61ac900ab4\") "
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.064287 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-utilities" (OuterVolumeSpecName: "utilities") pod "c81a5762-5dc6-42b0-a623-6f61ac900ab4" (UID: "c81a5762-5dc6-42b0-a623-6f61ac900ab4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.070850 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c81a5762-5dc6-42b0-a623-6f61ac900ab4-kube-api-access-w22cm" (OuterVolumeSpecName: "kube-api-access-w22cm") pod "c81a5762-5dc6-42b0-a623-6f61ac900ab4" (UID: "c81a5762-5dc6-42b0-a623-6f61ac900ab4"). InnerVolumeSpecName "kube-api-access-w22cm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.106356 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c81a5762-5dc6-42b0-a623-6f61ac900ab4" (UID: "c81a5762-5dc6-42b0-a623-6f61ac900ab4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.166087 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-utilities\") on node \"crc\" DevicePath \"\""
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.166141 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c81a5762-5dc6-42b0-a623-6f61ac900ab4-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.166155 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w22cm\" (UniqueName: \"kubernetes.io/projected/c81a5762-5dc6-42b0-a623-6f61ac900ab4-kube-api-access-w22cm\") on node \"crc\" DevicePath \"\""
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.807218 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2slrz" event={"ID":"c81a5762-5dc6-42b0-a623-6f61ac900ab4","Type":"ContainerDied","Data":"4aa50618510399e5aa7e346fbc6ff2257dd6bfef1601602ea1eb75ff733cf3ce"}
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.807298 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2slrz"
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.807681 4949 scope.go:117] "RemoveContainer" containerID="110e142294037823c4a91b94b481c239f98a1b1bcaf0722f123f26b8cfa93552"
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.845839 4949 scope.go:117] "RemoveContainer" containerID="fda044c2a11b3ef8d2e593ce920e6ed338633541440f1759f50dc4e26ec06ca9"
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.851019 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2slrz"]
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.859015 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2slrz"]
Oct 01 16:49:37 crc kubenswrapper[4949]: I1001 16:49:37.884305 4949 scope.go:117] "RemoveContainer" containerID="8c890b84375101959886b66e08e2a1253c042c0d6bec96f3f599af3df92247b1"
Oct 01 16:49:39 crc kubenswrapper[4949]: I1001 16:49:39.612802 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" path="/var/lib/kubelet/pods/c81a5762-5dc6-42b0-a623-6f61ac900ab4/volumes"
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.775597 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zmw7k"]
Oct 01 16:51:05 crc kubenswrapper[4949]: E1001 16:51:05.776534 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerName="extract-content"
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.776549 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerName="extract-content"
Oct 01 16:51:05 crc kubenswrapper[4949]: E1001 16:51:05.776593 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerName="extract-utilities"
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.776601 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerName="extract-utilities"
Oct 01 16:51:05 crc kubenswrapper[4949]: E1001 16:51:05.776621 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerName="registry-server"
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.776629 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerName="registry-server"
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.776869 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c81a5762-5dc6-42b0-a623-6f61ac900ab4" containerName="registry-server"
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.778507 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.789104 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zmw7k"]
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.902623 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-utilities\") pod \"community-operators-zmw7k\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") " pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.902829 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-catalog-content\") pod \"community-operators-zmw7k\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") " pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:05 crc kubenswrapper[4949]: I1001 16:51:05.902975 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsfbg\" (UniqueName: \"kubernetes.io/projected/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-kube-api-access-wsfbg\") pod \"community-operators-zmw7k\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") " pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:06 crc kubenswrapper[4949]: I1001 16:51:06.005090 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-catalog-content\") pod \"community-operators-zmw7k\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") " pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:06 crc kubenswrapper[4949]: I1001 16:51:06.005436 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wsfbg\" (UniqueName: \"kubernetes.io/projected/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-kube-api-access-wsfbg\") pod \"community-operators-zmw7k\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") " pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:06 crc kubenswrapper[4949]: I1001 16:51:06.005674 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-utilities\") pod \"community-operators-zmw7k\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") " pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:06 crc kubenswrapper[4949]: I1001 16:51:06.005833 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-catalog-content\") pod \"community-operators-zmw7k\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") " pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:06 crc kubenswrapper[4949]: I1001 16:51:06.006013 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-utilities\") pod \"community-operators-zmw7k\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") " pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:06 crc kubenswrapper[4949]: I1001 16:51:06.031562 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsfbg\" (UniqueName: \"kubernetes.io/projected/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-kube-api-access-wsfbg\") pod \"community-operators-zmw7k\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") " pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:06 crc kubenswrapper[4949]: I1001 16:51:06.113248 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:06 crc kubenswrapper[4949]: I1001 16:51:06.726672 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zmw7k"]
Oct 01 16:51:07 crc kubenswrapper[4949]: I1001 16:51:07.717065 4949 generic.go:334] "Generic (PLEG): container finished" podID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerID="6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48" exitCode=0
Oct 01 16:51:07 crc kubenswrapper[4949]: I1001 16:51:07.718430 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmw7k" event={"ID":"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c","Type":"ContainerDied","Data":"6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48"}
Oct 01 16:51:07 crc kubenswrapper[4949]: I1001 16:51:07.718535 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmw7k" event={"ID":"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c","Type":"ContainerStarted","Data":"0f6e1a6a25363a77453b758f59f2d4878c1b349c199cbd8df3f5e0dd909e9600"}
Oct 01 16:51:09 crc kubenswrapper[4949]: I1001 16:51:09.738323 4949 generic.go:334] "Generic (PLEG): container finished" podID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerID="4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8" exitCode=0
Oct 01 16:51:09 crc kubenswrapper[4949]: I1001 16:51:09.738362 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmw7k" event={"ID":"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c","Type":"ContainerDied","Data":"4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8"}
Oct 01 16:51:10 crc kubenswrapper[4949]: I1001 16:51:10.751530 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmw7k" event={"ID":"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c","Type":"ContainerStarted","Data":"b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19"}
Oct 01 16:51:10 crc kubenswrapper[4949]: I1001 16:51:10.783999 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zmw7k" podStartSLOduration=3.239936236 podStartE2EDuration="5.783973329s" podCreationTimestamp="2025-10-01 16:51:05 +0000 UTC" firstStartedPulling="2025-10-01 16:51:07.720605966 +0000 UTC m=+4167.026212167" lastFinishedPulling="2025-10-01 16:51:10.264643059 +0000 UTC m=+4169.570249260" observedRunningTime="2025-10-01 16:51:10.773631583 +0000 UTC m=+4170.079237774" watchObservedRunningTime="2025-10-01 16:51:10.783973329 +0000 UTC m=+4170.089579540"
Oct 01 16:51:16 crc kubenswrapper[4949]: I1001 16:51:16.113730 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:16 crc kubenswrapper[4949]: I1001 16:51:16.114254 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:16 crc kubenswrapper[4949]: I1001 16:51:16.178575 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:16 crc kubenswrapper[4949]: I1001 16:51:16.886984 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:16 crc kubenswrapper[4949]: I1001 16:51:16.936666 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zmw7k"]
Oct 01 16:51:18 crc kubenswrapper[4949]: I1001 16:51:18.038847 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 01 16:51:18 crc kubenswrapper[4949]: I1001 16:51:18.039212 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 01 16:51:18 crc kubenswrapper[4949]: I1001 16:51:18.827531 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zmw7k" podUID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerName="registry-server" containerID="cri-o://b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19" gracePeriod=2
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.478881 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.521642 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-utilities\") pod \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") "
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.521896 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-catalog-content\") pod \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") "
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.522014 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsfbg\" (UniqueName: \"kubernetes.io/projected/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-kube-api-access-wsfbg\") pod \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\" (UID: \"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c\") "
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.522960 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-utilities" (OuterVolumeSpecName: "utilities") pod "63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" (UID: "63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.534317 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-kube-api-access-wsfbg" (OuterVolumeSpecName: "kube-api-access-wsfbg") pod "63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" (UID: "63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c"). InnerVolumeSpecName "kube-api-access-wsfbg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.590155 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" (UID: "63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.624051 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-utilities\") on node \"crc\" DevicePath \"\""
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.624082 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.624217 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wsfbg\" (UniqueName: \"kubernetes.io/projected/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c-kube-api-access-wsfbg\") on node \"crc\" DevicePath \"\""
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.841932 4949 generic.go:334] "Generic (PLEG): container finished" podID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerID="b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19" exitCode=0
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.841977 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmw7k" event={"ID":"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c","Type":"ContainerDied","Data":"b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19"}
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.842010 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zmw7k" event={"ID":"63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c","Type":"ContainerDied","Data":"0f6e1a6a25363a77453b758f59f2d4878c1b349c199cbd8df3f5e0dd909e9600"}
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.842027 4949 scope.go:117] "RemoveContainer" containerID="b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.842048 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zmw7k"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.874969 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zmw7k"]
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.877978 4949 scope.go:117] "RemoveContainer" containerID="4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.881526 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zmw7k"]
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.914854 4949 scope.go:117] "RemoveContainer" containerID="6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.960612 4949 scope.go:117] "RemoveContainer" containerID="b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19"
Oct 01 16:51:19 crc kubenswrapper[4949]: E1001 16:51:19.961532 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19\": container with ID starting with b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19 not found: ID does not exist" containerID="b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.961582 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19"} err="failed to get container status \"b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19\": rpc error: code = NotFound desc = could not find container \"b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19\": container with ID starting with b5d5559b7d438bf6ffb95bc138d8cac36ac74eb4f167459a61072a55460fda19 not found: ID does not exist"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.961616 4949 scope.go:117] "RemoveContainer" containerID="4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8"
Oct 01 16:51:19 crc kubenswrapper[4949]: E1001 16:51:19.962174 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8\": container with ID starting with 4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8 not found: ID does not exist" containerID="4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.962305 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8"} err="failed to get container status \"4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8\": rpc error: code = NotFound desc = could not find container \"4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8\": container with ID starting with 4392de3b9e35f86a100b68ec875ce94de4cd2bf05a41194f336cd37830c146c8 not found: ID does not exist"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.962396 4949 scope.go:117] "RemoveContainer" containerID="6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48"
Oct 01 16:51:19 crc kubenswrapper[4949]: E1001 16:51:19.962891 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48\": container with ID starting with 6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48 not found: ID does not exist" containerID="6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48"
Oct 01 16:51:19 crc kubenswrapper[4949]: I1001 16:51:19.963008 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48"} err="failed to get container status \"6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48\": rpc error: code = NotFound desc = could not find container \"6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48\": container with ID starting with 6ff068bf162adffc727affd32b7bc2398a17c4dbfaf6c95cffa34cf1223f9c48 not found: ID does not exist"
Oct 01 16:51:21 crc kubenswrapper[4949]: I1001 16:51:21.641101 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" path="/var/lib/kubelet/pods/63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c/volumes"
Oct 01 16:51:48 crc kubenswrapper[4949]: I1001 16:51:48.038917 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 01 16:51:48 crc kubenswrapper[4949]: I1001 16:51:48.039556 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 01 16:52:18 crc kubenswrapper[4949]: I1001 16:52:18.038292 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 01 16:52:18 crc kubenswrapper[4949]: I1001 16:52:18.038932 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 01 16:52:18 crc kubenswrapper[4949]: I1001 16:52:18.038986 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287"
Oct 01 16:52:18 crc kubenswrapper[4949]: I1001 16:52:18.039936 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 01 16:52:18 crc kubenswrapper[4949]: I1001 16:52:18.040002 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" gracePeriod=600
Oct 01 16:52:18 crc kubenswrapper[4949]: E1001 16:52:18.171377 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558"
Oct 01 16:52:18 crc kubenswrapper[4949]: I1001 16:52:18.433744 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" exitCode=0
Oct 01 16:52:18 crc kubenswrapper[4949]: I1001 16:52:18.433794 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5"}
Oct 01 16:52:18 crc kubenswrapper[4949]: I1001 16:52:18.433828 4949 scope.go:117] "RemoveContainer" containerID="96294e80ce7fbdd1396feefeeb458695df351cf1bda29b0fa09dc48d23e27708"
Oct 01 16:52:18 crc kubenswrapper[4949]: I1001 16:52:18.434557 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5"
Oct 01 16:52:18 crc kubenswrapper[4949]: E1001 16:52:18.434795 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558"
Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.194402 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-zn6cq"]
Oct 01 16:52:28 crc kubenswrapper[4949]: E1001 16:52:28.195358 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerName="registry-server"
Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.195374 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerName="registry-server"
Oct 01 16:52:28 crc kubenswrapper[4949]: E1001 16:52:28.195397 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerName="extract-utilities"
Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.195405 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerName="extract-utilities"
Oct 01 16:52:28 crc kubenswrapper[4949]: E1001 16:52:28.195418 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerName="extract-content"
Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.195426 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerName="extract-content"
Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.195665 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="63b2deb4-9f9f-42b8-a8c4-a053dc4e7e9c" containerName="registry-server"
Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.198392 4949 util.go:30] "No sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.217409 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zn6cq"] Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.378370 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-catalog-content\") pod \"redhat-operators-zn6cq\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.378696 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-utilities\") pod \"redhat-operators-zn6cq\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.378830 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6xxg\" (UniqueName: \"kubernetes.io/projected/194ffaa7-f124-4a10-a04d-b489362ea332-kube-api-access-l6xxg\") pod \"redhat-operators-zn6cq\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.480475 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-catalog-content\") pod \"redhat-operators-zn6cq\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.480831 4949 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-utilities\") pod \"redhat-operators-zn6cq\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.480956 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6xxg\" (UniqueName: \"kubernetes.io/projected/194ffaa7-f124-4a10-a04d-b489362ea332-kube-api-access-l6xxg\") pod \"redhat-operators-zn6cq\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.481061 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-catalog-content\") pod \"redhat-operators-zn6cq\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.481337 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-utilities\") pod \"redhat-operators-zn6cq\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.501073 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6xxg\" (UniqueName: \"kubernetes.io/projected/194ffaa7-f124-4a10-a04d-b489362ea332-kube-api-access-l6xxg\") pod \"redhat-operators-zn6cq\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:28 crc kubenswrapper[4949]: I1001 16:52:28.533911 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:29 crc kubenswrapper[4949]: I1001 16:52:29.003306 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-zn6cq"] Oct 01 16:52:29 crc kubenswrapper[4949]: I1001 16:52:29.555856 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zn6cq" event={"ID":"194ffaa7-f124-4a10-a04d-b489362ea332","Type":"ContainerStarted","Data":"e3350e597e4330e4497639fa34257cb461e4fc5c194d20f620a043ca04cc048f"} Oct 01 16:52:30 crc kubenswrapper[4949]: I1001 16:52:30.569806 4949 generic.go:334] "Generic (PLEG): container finished" podID="194ffaa7-f124-4a10-a04d-b489362ea332" containerID="dc606dfe51e9aadfe5a650fd486d09b7240d4e7bdf6d2e73f89b89989121ee68" exitCode=0 Oct 01 16:52:30 crc kubenswrapper[4949]: I1001 16:52:30.569950 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zn6cq" event={"ID":"194ffaa7-f124-4a10-a04d-b489362ea332","Type":"ContainerDied","Data":"dc606dfe51e9aadfe5a650fd486d09b7240d4e7bdf6d2e73f89b89989121ee68"} Oct 01 16:52:31 crc kubenswrapper[4949]: I1001 16:52:31.609398 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:52:31 crc kubenswrapper[4949]: E1001 16:52:31.610206 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:52:32 crc kubenswrapper[4949]: I1001 16:52:32.610871 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zn6cq" 
event={"ID":"194ffaa7-f124-4a10-a04d-b489362ea332","Type":"ContainerStarted","Data":"5da79352d6af68bd8a46649584f75e0682e40474fc22d5661ff5fa485ea622c1"} Oct 01 16:52:35 crc kubenswrapper[4949]: I1001 16:52:35.643432 4949 generic.go:334] "Generic (PLEG): container finished" podID="194ffaa7-f124-4a10-a04d-b489362ea332" containerID="5da79352d6af68bd8a46649584f75e0682e40474fc22d5661ff5fa485ea622c1" exitCode=0 Oct 01 16:52:35 crc kubenswrapper[4949]: I1001 16:52:35.643489 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zn6cq" event={"ID":"194ffaa7-f124-4a10-a04d-b489362ea332","Type":"ContainerDied","Data":"5da79352d6af68bd8a46649584f75e0682e40474fc22d5661ff5fa485ea622c1"} Oct 01 16:52:37 crc kubenswrapper[4949]: I1001 16:52:37.665313 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zn6cq" event={"ID":"194ffaa7-f124-4a10-a04d-b489362ea332","Type":"ContainerStarted","Data":"d14377af744fb8d3a82b1803934707b05735215340065b2d940de3c7ce61f1dd"} Oct 01 16:52:37 crc kubenswrapper[4949]: I1001 16:52:37.702783 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-zn6cq" podStartSLOduration=3.427732684 podStartE2EDuration="9.702755843s" podCreationTimestamp="2025-10-01 16:52:28 +0000 UTC" firstStartedPulling="2025-10-01 16:52:30.573010533 +0000 UTC m=+4249.878616724" lastFinishedPulling="2025-10-01 16:52:36.848033682 +0000 UTC m=+4256.153639883" observedRunningTime="2025-10-01 16:52:37.699296156 +0000 UTC m=+4257.004902357" watchObservedRunningTime="2025-10-01 16:52:37.702755843 +0000 UTC m=+4257.008362074" Oct 01 16:52:38 crc kubenswrapper[4949]: I1001 16:52:38.534874 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:38 crc kubenswrapper[4949]: I1001 16:52:38.535225 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="" pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:39 crc kubenswrapper[4949]: I1001 16:52:39.583653 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-zn6cq" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" containerName="registry-server" probeResult="failure" output=< Oct 01 16:52:39 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s Oct 01 16:52:39 crc kubenswrapper[4949]: > Oct 01 16:52:42 crc kubenswrapper[4949]: I1001 16:52:42.601920 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:52:42 crc kubenswrapper[4949]: E1001 16:52:42.602743 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:52:48 crc kubenswrapper[4949]: I1001 16:52:48.645715 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:48 crc kubenswrapper[4949]: I1001 16:52:48.726416 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:48 crc kubenswrapper[4949]: I1001 16:52:48.888044 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zn6cq"] Oct 01 16:52:49 crc kubenswrapper[4949]: I1001 16:52:49.788409 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-zn6cq" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" containerName="registry-server" 
containerID="cri-o://d14377af744fb8d3a82b1803934707b05735215340065b2d940de3c7ce61f1dd" gracePeriod=2 Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.799874 4949 generic.go:334] "Generic (PLEG): container finished" podID="194ffaa7-f124-4a10-a04d-b489362ea332" containerID="d14377af744fb8d3a82b1803934707b05735215340065b2d940de3c7ce61f1dd" exitCode=0 Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.799927 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zn6cq" event={"ID":"194ffaa7-f124-4a10-a04d-b489362ea332","Type":"ContainerDied","Data":"d14377af744fb8d3a82b1803934707b05735215340065b2d940de3c7ce61f1dd"} Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.800503 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-zn6cq" event={"ID":"194ffaa7-f124-4a10-a04d-b489362ea332","Type":"ContainerDied","Data":"e3350e597e4330e4497639fa34257cb461e4fc5c194d20f620a043ca04cc048f"} Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.800542 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e3350e597e4330e4497639fa34257cb461e4fc5c194d20f620a043ca04cc048f" Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.866880 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.961155 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-utilities\") pod \"194ffaa7-f124-4a10-a04d-b489362ea332\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.961270 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6xxg\" (UniqueName: \"kubernetes.io/projected/194ffaa7-f124-4a10-a04d-b489362ea332-kube-api-access-l6xxg\") pod \"194ffaa7-f124-4a10-a04d-b489362ea332\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.961368 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-catalog-content\") pod \"194ffaa7-f124-4a10-a04d-b489362ea332\" (UID: \"194ffaa7-f124-4a10-a04d-b489362ea332\") " Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.971411 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-utilities" (OuterVolumeSpecName: "utilities") pod "194ffaa7-f124-4a10-a04d-b489362ea332" (UID: "194ffaa7-f124-4a10-a04d-b489362ea332"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:52:50 crc kubenswrapper[4949]: I1001 16:52:50.977206 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/194ffaa7-f124-4a10-a04d-b489362ea332-kube-api-access-l6xxg" (OuterVolumeSpecName: "kube-api-access-l6xxg") pod "194ffaa7-f124-4a10-a04d-b489362ea332" (UID: "194ffaa7-f124-4a10-a04d-b489362ea332"). InnerVolumeSpecName "kube-api-access-l6xxg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:52:51 crc kubenswrapper[4949]: I1001 16:52:51.034274 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "194ffaa7-f124-4a10-a04d-b489362ea332" (UID: "194ffaa7-f124-4a10-a04d-b489362ea332"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:52:51 crc kubenswrapper[4949]: I1001 16:52:51.063071 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:52:51 crc kubenswrapper[4949]: I1001 16:52:51.063110 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6xxg\" (UniqueName: \"kubernetes.io/projected/194ffaa7-f124-4a10-a04d-b489362ea332-kube-api-access-l6xxg\") on node \"crc\" DevicePath \"\"" Oct 01 16:52:51 crc kubenswrapper[4949]: I1001 16:52:51.063143 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/194ffaa7-f124-4a10-a04d-b489362ea332-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:52:51 crc kubenswrapper[4949]: I1001 16:52:51.809081 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-zn6cq" Oct 01 16:52:51 crc kubenswrapper[4949]: I1001 16:52:51.845403 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-zn6cq"] Oct 01 16:52:51 crc kubenswrapper[4949]: I1001 16:52:51.861502 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-zn6cq"] Oct 01 16:52:53 crc kubenswrapper[4949]: I1001 16:52:53.617250 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" path="/var/lib/kubelet/pods/194ffaa7-f124-4a10-a04d-b489362ea332/volumes" Oct 01 16:52:57 crc kubenswrapper[4949]: I1001 16:52:57.602430 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:52:57 crc kubenswrapper[4949]: E1001 16:52:57.603560 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:53:08 crc kubenswrapper[4949]: I1001 16:53:08.602850 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:53:08 crc kubenswrapper[4949]: E1001 16:53:08.603965 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" 
podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:53:22 crc kubenswrapper[4949]: I1001 16:53:22.601462 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:53:22 crc kubenswrapper[4949]: E1001 16:53:22.602181 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:53:35 crc kubenswrapper[4949]: I1001 16:53:35.602354 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:53:35 crc kubenswrapper[4949]: E1001 16:53:35.603095 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:53:49 crc kubenswrapper[4949]: I1001 16:53:49.603242 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:53:49 crc kubenswrapper[4949]: E1001 16:53:49.604149 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:54:00 crc kubenswrapper[4949]: I1001 16:54:00.601927 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:54:00 crc kubenswrapper[4949]: E1001 16:54:00.603890 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:54:12 crc kubenswrapper[4949]: I1001 16:54:12.601709 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:54:12 crc kubenswrapper[4949]: E1001 16:54:12.602652 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:54:23 crc kubenswrapper[4949]: I1001 16:54:23.601818 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:54:23 crc kubenswrapper[4949]: E1001 16:54:23.602710 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:54:38 crc kubenswrapper[4949]: I1001 16:54:38.602284 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:54:38 crc kubenswrapper[4949]: E1001 16:54:38.602896 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:54:49 crc kubenswrapper[4949]: I1001 16:54:49.602471 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:54:49 crc kubenswrapper[4949]: E1001 16:54:49.603433 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:55:02 crc kubenswrapper[4949]: I1001 16:55:02.602849 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:55:02 crc kubenswrapper[4949]: E1001 16:55:02.603786 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:55:17 crc kubenswrapper[4949]: I1001 16:55:17.602280 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:55:17 crc kubenswrapper[4949]: E1001 16:55:17.602908 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:55:28 crc kubenswrapper[4949]: I1001 16:55:28.601625 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:55:28 crc kubenswrapper[4949]: E1001 16:55:28.602225 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:55:41 crc kubenswrapper[4949]: I1001 16:55:41.612898 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:55:41 crc kubenswrapper[4949]: E1001 16:55:41.613985 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:55:52 crc kubenswrapper[4949]: I1001 16:55:52.603002 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:55:52 crc kubenswrapper[4949]: E1001 16:55:52.603923 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:56:05 crc kubenswrapper[4949]: I1001 16:56:05.601817 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:56:05 crc kubenswrapper[4949]: E1001 16:56:05.602789 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:56:19 crc kubenswrapper[4949]: I1001 16:56:19.601767 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:56:19 crc kubenswrapper[4949]: E1001 16:56:19.602537 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: 
\"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:56:34 crc kubenswrapper[4949]: I1001 16:56:34.602581 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:56:34 crc kubenswrapper[4949]: E1001 16:56:34.603313 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:56:49 crc kubenswrapper[4949]: I1001 16:56:49.602106 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:56:49 crc kubenswrapper[4949]: E1001 16:56:49.602893 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:57:04 crc kubenswrapper[4949]: I1001 16:57:04.602050 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:57:04 crc kubenswrapper[4949]: E1001 16:57:04.604174 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 16:57:18 crc kubenswrapper[4949]: I1001 16:57:18.601975 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 16:57:19 crc kubenswrapper[4949]: I1001 16:57:19.334898 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"6b0a9adeb590d3e2b219b7c6db7fca17ec92b8ee94f12ad2f3b0eff4d03c455e"} Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.327937 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lllpd"] Oct 01 16:58:15 crc kubenswrapper[4949]: E1001 16:58:15.329511 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" containerName="extract-utilities" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.329535 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" containerName="extract-utilities" Oct 01 16:58:15 crc kubenswrapper[4949]: E1001 16:58:15.329574 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" containerName="registry-server" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.329586 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" containerName="registry-server" Oct 01 16:58:15 crc kubenswrapper[4949]: E1001 16:58:15.329611 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" containerName="extract-content" Oct 
01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.329624 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" containerName="extract-content" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.329956 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="194ffaa7-f124-4a10-a04d-b489362ea332" containerName="registry-server" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.339514 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lllpd"] Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.339698 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.466660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bn6xt\" (UniqueName: \"kubernetes.io/projected/6790f681-cf74-45fb-967d-9512dfb43f39-kube-api-access-bn6xt\") pod \"redhat-marketplace-lllpd\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.466740 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-catalog-content\") pod \"redhat-marketplace-lllpd\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.466870 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-utilities\") pod \"redhat-marketplace-lllpd\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " pod="openshift-marketplace/redhat-marketplace-lllpd" 
Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.568611 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-utilities\") pod \"redhat-marketplace-lllpd\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.568829 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bn6xt\" (UniqueName: \"kubernetes.io/projected/6790f681-cf74-45fb-967d-9512dfb43f39-kube-api-access-bn6xt\") pod \"redhat-marketplace-lllpd\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.568861 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-catalog-content\") pod \"redhat-marketplace-lllpd\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.569432 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-utilities\") pod \"redhat-marketplace-lllpd\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.569457 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-catalog-content\") pod \"redhat-marketplace-lllpd\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 
16:58:15.593216 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bn6xt\" (UniqueName: \"kubernetes.io/projected/6790f681-cf74-45fb-967d-9512dfb43f39-kube-api-access-bn6xt\") pod \"redhat-marketplace-lllpd\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:15 crc kubenswrapper[4949]: I1001 16:58:15.674485 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:16 crc kubenswrapper[4949]: I1001 16:58:16.218023 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lllpd"] Oct 01 16:58:16 crc kubenswrapper[4949]: I1001 16:58:16.841167 4949 generic.go:334] "Generic (PLEG): container finished" podID="6790f681-cf74-45fb-967d-9512dfb43f39" containerID="885798d6bf5cadd31dd27a89eb313d6f3590ee0ddbe0634bab7da2b3aeb6e654" exitCode=0 Oct 01 16:58:16 crc kubenswrapper[4949]: I1001 16:58:16.841249 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lllpd" event={"ID":"6790f681-cf74-45fb-967d-9512dfb43f39","Type":"ContainerDied","Data":"885798d6bf5cadd31dd27a89eb313d6f3590ee0ddbe0634bab7da2b3aeb6e654"} Oct 01 16:58:16 crc kubenswrapper[4949]: I1001 16:58:16.841937 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lllpd" event={"ID":"6790f681-cf74-45fb-967d-9512dfb43f39","Type":"ContainerStarted","Data":"2d2eae227113f15cca74d36dc69af10a45cdeed8c8d22e2a323cfc268053ad03"} Oct 01 16:58:16 crc kubenswrapper[4949]: I1001 16:58:16.843821 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 16:58:18 crc kubenswrapper[4949]: I1001 16:58:18.882601 4949 generic.go:334] "Generic (PLEG): container finished" podID="6790f681-cf74-45fb-967d-9512dfb43f39" 
containerID="31640c673f317762d4ba37ff753577117490e6cb7f99c761eff190a14d4b943a" exitCode=0 Oct 01 16:58:18 crc kubenswrapper[4949]: I1001 16:58:18.883012 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lllpd" event={"ID":"6790f681-cf74-45fb-967d-9512dfb43f39","Type":"ContainerDied","Data":"31640c673f317762d4ba37ff753577117490e6cb7f99c761eff190a14d4b943a"} Oct 01 16:58:19 crc kubenswrapper[4949]: I1001 16:58:19.897877 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lllpd" event={"ID":"6790f681-cf74-45fb-967d-9512dfb43f39","Type":"ContainerStarted","Data":"6d9edda847c17355e2547fd32d09f54222f6f59e7036b5cbea4d8de976341b0e"} Oct 01 16:58:25 crc kubenswrapper[4949]: I1001 16:58:25.674942 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:25 crc kubenswrapper[4949]: I1001 16:58:25.675424 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:25 crc kubenswrapper[4949]: I1001 16:58:25.725093 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:25 crc kubenswrapper[4949]: I1001 16:58:25.750252 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-lllpd" podStartSLOduration=8.271246677 podStartE2EDuration="10.750231041s" podCreationTimestamp="2025-10-01 16:58:15 +0000 UTC" firstStartedPulling="2025-10-01 16:58:16.843328012 +0000 UTC m=+4596.148934243" lastFinishedPulling="2025-10-01 16:58:19.322312416 +0000 UTC m=+4598.627918607" observedRunningTime="2025-10-01 16:58:19.921385438 +0000 UTC m=+4599.226991639" watchObservedRunningTime="2025-10-01 16:58:25.750231041 +0000 UTC m=+4605.055837232" Oct 01 16:58:26 crc kubenswrapper[4949]: I1001 16:58:26.008553 
4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:27 crc kubenswrapper[4949]: I1001 16:58:27.113879 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lllpd"] Oct 01 16:58:27 crc kubenswrapper[4949]: I1001 16:58:27.982469 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lllpd" podUID="6790f681-cf74-45fb-967d-9512dfb43f39" containerName="registry-server" containerID="cri-o://6d9edda847c17355e2547fd32d09f54222f6f59e7036b5cbea4d8de976341b0e" gracePeriod=2 Oct 01 16:58:28 crc kubenswrapper[4949]: I1001 16:58:28.995734 4949 generic.go:334] "Generic (PLEG): container finished" podID="6790f681-cf74-45fb-967d-9512dfb43f39" containerID="6d9edda847c17355e2547fd32d09f54222f6f59e7036b5cbea4d8de976341b0e" exitCode=0 Oct 01 16:58:28 crc kubenswrapper[4949]: I1001 16:58:28.995833 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lllpd" event={"ID":"6790f681-cf74-45fb-967d-9512dfb43f39","Type":"ContainerDied","Data":"6d9edda847c17355e2547fd32d09f54222f6f59e7036b5cbea4d8de976341b0e"} Oct 01 16:58:28 crc kubenswrapper[4949]: I1001 16:58:28.996273 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lllpd" event={"ID":"6790f681-cf74-45fb-967d-9512dfb43f39","Type":"ContainerDied","Data":"2d2eae227113f15cca74d36dc69af10a45cdeed8c8d22e2a323cfc268053ad03"} Oct 01 16:58:28 crc kubenswrapper[4949]: I1001 16:58:28.996302 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2d2eae227113f15cca74d36dc69af10a45cdeed8c8d22e2a323cfc268053ad03" Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.397835 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.552860 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-utilities\") pod \"6790f681-cf74-45fb-967d-9512dfb43f39\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.553018 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-catalog-content\") pod \"6790f681-cf74-45fb-967d-9512dfb43f39\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.553073 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bn6xt\" (UniqueName: \"kubernetes.io/projected/6790f681-cf74-45fb-967d-9512dfb43f39-kube-api-access-bn6xt\") pod \"6790f681-cf74-45fb-967d-9512dfb43f39\" (UID: \"6790f681-cf74-45fb-967d-9512dfb43f39\") " Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.553729 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-utilities" (OuterVolumeSpecName: "utilities") pod "6790f681-cf74-45fb-967d-9512dfb43f39" (UID: "6790f681-cf74-45fb-967d-9512dfb43f39"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.564614 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6790f681-cf74-45fb-967d-9512dfb43f39-kube-api-access-bn6xt" (OuterVolumeSpecName: "kube-api-access-bn6xt") pod "6790f681-cf74-45fb-967d-9512dfb43f39" (UID: "6790f681-cf74-45fb-967d-9512dfb43f39"). InnerVolumeSpecName "kube-api-access-bn6xt". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.655407 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.655442 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bn6xt\" (UniqueName: \"kubernetes.io/projected/6790f681-cf74-45fb-967d-9512dfb43f39-kube-api-access-bn6xt\") on node \"crc\" DevicePath \"\"" Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.914077 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "6790f681-cf74-45fb-967d-9512dfb43f39" (UID: "6790f681-cf74-45fb-967d-9512dfb43f39"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 16:58:29 crc kubenswrapper[4949]: I1001 16:58:29.962765 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/6790f681-cf74-45fb-967d-9512dfb43f39-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 16:58:30 crc kubenswrapper[4949]: I1001 16:58:30.005980 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lllpd" Oct 01 16:58:30 crc kubenswrapper[4949]: I1001 16:58:30.040572 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lllpd"] Oct 01 16:58:30 crc kubenswrapper[4949]: I1001 16:58:30.051059 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lllpd"] Oct 01 16:58:31 crc kubenswrapper[4949]: I1001 16:58:31.615118 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6790f681-cf74-45fb-967d-9512dfb43f39" path="/var/lib/kubelet/pods/6790f681-cf74-45fb-967d-9512dfb43f39/volumes" Oct 01 16:58:49 crc kubenswrapper[4949]: I1001 16:58:49.118092 4949 scope.go:117] "RemoveContainer" containerID="5da79352d6af68bd8a46649584f75e0682e40474fc22d5661ff5fa485ea622c1" Oct 01 16:58:49 crc kubenswrapper[4949]: I1001 16:58:49.152426 4949 scope.go:117] "RemoveContainer" containerID="dc606dfe51e9aadfe5a650fd486d09b7240d4e7bdf6d2e73f89b89989121ee68" Oct 01 16:58:49 crc kubenswrapper[4949]: I1001 16:58:49.197654 4949 scope.go:117] "RemoveContainer" containerID="d14377af744fb8d3a82b1803934707b05735215340065b2d940de3c7ce61f1dd" Oct 01 16:59:18 crc kubenswrapper[4949]: I1001 16:59:18.038428 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:59:18 crc kubenswrapper[4949]: I1001 16:59:18.038938 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 16:59:48 crc kubenswrapper[4949]: 
I1001 16:59:48.039029 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 16:59:48 crc kubenswrapper[4949]: I1001 16:59:48.039579 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.151407 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4"] Oct 01 17:00:00 crc kubenswrapper[4949]: E1001 17:00:00.153868 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6790f681-cf74-45fb-967d-9512dfb43f39" containerName="extract-utilities" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.153892 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6790f681-cf74-45fb-967d-9512dfb43f39" containerName="extract-utilities" Oct 01 17:00:00 crc kubenswrapper[4949]: E1001 17:00:00.153910 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6790f681-cf74-45fb-967d-9512dfb43f39" containerName="registry-server" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.153918 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="6790f681-cf74-45fb-967d-9512dfb43f39" containerName="registry-server" Oct 01 17:00:00 crc kubenswrapper[4949]: E1001 17:00:00.153935 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6790f681-cf74-45fb-967d-9512dfb43f39" containerName="extract-content" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.153944 4949 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="6790f681-cf74-45fb-967d-9512dfb43f39" containerName="extract-content" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.154218 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="6790f681-cf74-45fb-967d-9512dfb43f39" containerName="registry-server" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.156068 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.158514 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.158812 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.161323 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4"] Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.299637 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-config-volume\") pod \"collect-profiles-29322300-w9kv4\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.299911 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf9x8\" (UniqueName: \"kubernetes.io/projected/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-kube-api-access-tf9x8\") pod \"collect-profiles-29322300-w9kv4\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.299977 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-secret-volume\") pod \"collect-profiles-29322300-w9kv4\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.411208 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-secret-volume\") pod \"collect-profiles-29322300-w9kv4\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.411391 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-config-volume\") pod \"collect-profiles-29322300-w9kv4\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.411443 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tf9x8\" (UniqueName: \"kubernetes.io/projected/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-kube-api-access-tf9x8\") pod \"collect-profiles-29322300-w9kv4\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.412773 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-config-volume\") pod \"collect-profiles-29322300-w9kv4\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.646641 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tf9x8\" (UniqueName: \"kubernetes.io/projected/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-kube-api-access-tf9x8\") pod \"collect-profiles-29322300-w9kv4\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.646880 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-secret-volume\") pod \"collect-profiles-29322300-w9kv4\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:00 crc kubenswrapper[4949]: I1001 17:00:00.774352 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:01 crc kubenswrapper[4949]: I1001 17:00:01.278834 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4"] Oct 01 17:00:02 crc kubenswrapper[4949]: I1001 17:00:02.035982 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" event={"ID":"c8b3ed62-f84d-4ffb-bdd8-3f6416753131","Type":"ContainerStarted","Data":"8207dd3dee4e21278892806bcf7deb0062f9cc880d831e477ea75cbe23924274"} Oct 01 17:00:02 crc kubenswrapper[4949]: I1001 17:00:02.036555 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" event={"ID":"c8b3ed62-f84d-4ffb-bdd8-3f6416753131","Type":"ContainerStarted","Data":"8b7f91e856a32c6be36531c5ae9ab1a52b7f36506573af9b7f852b92dd7b6287"} Oct 01 17:00:02 crc kubenswrapper[4949]: I1001 17:00:02.062189 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" podStartSLOduration=2.062165771 podStartE2EDuration="2.062165771s" podCreationTimestamp="2025-10-01 17:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 17:00:02.055007643 +0000 UTC m=+4701.360613854" watchObservedRunningTime="2025-10-01 17:00:02.062165771 +0000 UTC m=+4701.367771962" Oct 01 17:00:03 crc kubenswrapper[4949]: I1001 17:00:03.047360 4949 generic.go:334] "Generic (PLEG): container finished" podID="c8b3ed62-f84d-4ffb-bdd8-3f6416753131" containerID="8207dd3dee4e21278892806bcf7deb0062f9cc880d831e477ea75cbe23924274" exitCode=0 Oct 01 17:00:03 crc kubenswrapper[4949]: I1001 17:00:03.047687 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" event={"ID":"c8b3ed62-f84d-4ffb-bdd8-3f6416753131","Type":"ContainerDied","Data":"8207dd3dee4e21278892806bcf7deb0062f9cc880d831e477ea75cbe23924274"} Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.489034 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.612280 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-secret-volume\") pod \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.612630 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tf9x8\" (UniqueName: \"kubernetes.io/projected/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-kube-api-access-tf9x8\") pod \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.613220 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-config-volume\") pod \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\" (UID: \"c8b3ed62-f84d-4ffb-bdd8-3f6416753131\") " Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.613758 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-config-volume" (OuterVolumeSpecName: "config-volume") pod "c8b3ed62-f84d-4ffb-bdd8-3f6416753131" (UID: "c8b3ed62-f84d-4ffb-bdd8-3f6416753131"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.628071 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c8b3ed62-f84d-4ffb-bdd8-3f6416753131" (UID: "c8b3ed62-f84d-4ffb-bdd8-3f6416753131"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.628182 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-kube-api-access-tf9x8" (OuterVolumeSpecName: "kube-api-access-tf9x8") pod "c8b3ed62-f84d-4ffb-bdd8-3f6416753131" (UID: "c8b3ed62-f84d-4ffb-bdd8-3f6416753131"). InnerVolumeSpecName "kube-api-access-tf9x8". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.715180 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tf9x8\" (UniqueName: \"kubernetes.io/projected/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-kube-api-access-tf9x8\") on node \"crc\" DevicePath \"\"" Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.715211 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-config-volume\") on node \"crc\" DevicePath \"\"" Oct 01 17:00:04 crc kubenswrapper[4949]: I1001 17:00:04.715220 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c8b3ed62-f84d-4ffb-bdd8-3f6416753131-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 01 17:00:05 crc kubenswrapper[4949]: I1001 17:00:05.072152 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" 
event={"ID":"c8b3ed62-f84d-4ffb-bdd8-3f6416753131","Type":"ContainerDied","Data":"8b7f91e856a32c6be36531c5ae9ab1a52b7f36506573af9b7f852b92dd7b6287"} Oct 01 17:00:05 crc kubenswrapper[4949]: I1001 17:00:05.072191 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8b7f91e856a32c6be36531c5ae9ab1a52b7f36506573af9b7f852b92dd7b6287" Oct 01 17:00:05 crc kubenswrapper[4949]: I1001 17:00:05.072282 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322300-w9kv4" Oct 01 17:00:05 crc kubenswrapper[4949]: I1001 17:00:05.572239 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv"] Oct 01 17:00:05 crc kubenswrapper[4949]: I1001 17:00:05.594865 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322255-6hmtv"] Oct 01 17:00:05 crc kubenswrapper[4949]: I1001 17:00:05.613598 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1fc09fb-8949-422e-80cd-e1b1de960653" path="/var/lib/kubelet/pods/d1fc09fb-8949-422e-80cd-e1b1de960653/volumes" Oct 01 17:00:18 crc kubenswrapper[4949]: I1001 17:00:18.038416 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:00:18 crc kubenswrapper[4949]: I1001 17:00:18.039006 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 17:00:18 crc 
kubenswrapper[4949]: I1001 17:00:18.039061 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 17:00:18 crc kubenswrapper[4949]: I1001 17:00:18.039848 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"6b0a9adeb590d3e2b219b7c6db7fca17ec92b8ee94f12ad2f3b0eff4d03c455e"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 17:00:18 crc kubenswrapper[4949]: I1001 17:00:18.039907 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://6b0a9adeb590d3e2b219b7c6db7fca17ec92b8ee94f12ad2f3b0eff4d03c455e" gracePeriod=600 Oct 01 17:00:18 crc kubenswrapper[4949]: I1001 17:00:18.206319 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="6b0a9adeb590d3e2b219b7c6db7fca17ec92b8ee94f12ad2f3b0eff4d03c455e" exitCode=0 Oct 01 17:00:18 crc kubenswrapper[4949]: I1001 17:00:18.206374 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"6b0a9adeb590d3e2b219b7c6db7fca17ec92b8ee94f12ad2f3b0eff4d03c455e"} Oct 01 17:00:18 crc kubenswrapper[4949]: I1001 17:00:18.206412 4949 scope.go:117] "RemoveContainer" containerID="8bf2e7829238a8cfa378df5f67d25a20a40582c69cfd862883190466bce008c5" Oct 01 17:00:19 crc kubenswrapper[4949]: I1001 17:00:19.217054 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" 
event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4"} Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.776351 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-7wvs4"] Oct 01 17:00:20 crc kubenswrapper[4949]: E1001 17:00:20.777066 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c8b3ed62-f84d-4ffb-bdd8-3f6416753131" containerName="collect-profiles" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.777081 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c8b3ed62-f84d-4ffb-bdd8-3f6416753131" containerName="collect-profiles" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.777322 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c8b3ed62-f84d-4ffb-bdd8-3f6416753131" containerName="collect-profiles" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.779713 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.796023 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wvs4"] Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.850465 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-utilities\") pod \"certified-operators-7wvs4\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.850549 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-catalog-content\") pod \"certified-operators-7wvs4\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.851071 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mrvv\" (UniqueName: \"kubernetes.io/projected/8df7475f-1dd7-47e4-bfef-ad0789feb408-kube-api-access-8mrvv\") pod \"certified-operators-7wvs4\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.953385 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mrvv\" (UniqueName: \"kubernetes.io/projected/8df7475f-1dd7-47e4-bfef-ad0789feb408-kube-api-access-8mrvv\") pod \"certified-operators-7wvs4\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.953783 4949 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-utilities\") pod \"certified-operators-7wvs4\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.953825 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-catalog-content\") pod \"certified-operators-7wvs4\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.954497 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-utilities\") pod \"certified-operators-7wvs4\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.954515 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-catalog-content\") pod \"certified-operators-7wvs4\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:20 crc kubenswrapper[4949]: I1001 17:00:20.979722 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mrvv\" (UniqueName: \"kubernetes.io/projected/8df7475f-1dd7-47e4-bfef-ad0789feb408-kube-api-access-8mrvv\") pod \"certified-operators-7wvs4\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:21 crc kubenswrapper[4949]: I1001 17:00:21.101052 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:21 crc kubenswrapper[4949]: W1001 17:00:21.641615 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8df7475f_1dd7_47e4_bfef_ad0789feb408.slice/crio-273e3bfeeec340814a3d52620334e100465e4aef0ca2408e9f5113a1c96c3e0e WatchSource:0}: Error finding container 273e3bfeeec340814a3d52620334e100465e4aef0ca2408e9f5113a1c96c3e0e: Status 404 returned error can't find the container with id 273e3bfeeec340814a3d52620334e100465e4aef0ca2408e9f5113a1c96c3e0e Oct 01 17:00:21 crc kubenswrapper[4949]: I1001 17:00:21.653167 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-7wvs4"] Oct 01 17:00:22 crc kubenswrapper[4949]: I1001 17:00:22.262886 4949 generic.go:334] "Generic (PLEG): container finished" podID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerID="48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1" exitCode=0 Oct 01 17:00:22 crc kubenswrapper[4949]: I1001 17:00:22.262990 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wvs4" event={"ID":"8df7475f-1dd7-47e4-bfef-ad0789feb408","Type":"ContainerDied","Data":"48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1"} Oct 01 17:00:22 crc kubenswrapper[4949]: I1001 17:00:22.263229 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wvs4" event={"ID":"8df7475f-1dd7-47e4-bfef-ad0789feb408","Type":"ContainerStarted","Data":"273e3bfeeec340814a3d52620334e100465e4aef0ca2408e9f5113a1c96c3e0e"} Oct 01 17:00:26 crc kubenswrapper[4949]: I1001 17:00:26.311310 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wvs4" 
event={"ID":"8df7475f-1dd7-47e4-bfef-ad0789feb408","Type":"ContainerStarted","Data":"aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee"} Oct 01 17:00:27 crc kubenswrapper[4949]: I1001 17:00:27.322551 4949 generic.go:334] "Generic (PLEG): container finished" podID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerID="aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee" exitCode=0 Oct 01 17:00:27 crc kubenswrapper[4949]: I1001 17:00:27.322635 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wvs4" event={"ID":"8df7475f-1dd7-47e4-bfef-ad0789feb408","Type":"ContainerDied","Data":"aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee"} Oct 01 17:00:28 crc kubenswrapper[4949]: I1001 17:00:28.341078 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wvs4" event={"ID":"8df7475f-1dd7-47e4-bfef-ad0789feb408","Type":"ContainerStarted","Data":"320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1"} Oct 01 17:00:28 crc kubenswrapper[4949]: I1001 17:00:28.377025 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-7wvs4" podStartSLOduration=2.624280906 podStartE2EDuration="8.377004862s" podCreationTimestamp="2025-10-01 17:00:20 +0000 UTC" firstStartedPulling="2025-10-01 17:00:22.265163437 +0000 UTC m=+4721.570769628" lastFinishedPulling="2025-10-01 17:00:28.017887393 +0000 UTC m=+4727.323493584" observedRunningTime="2025-10-01 17:00:28.374956856 +0000 UTC m=+4727.680563067" watchObservedRunningTime="2025-10-01 17:00:28.377004862 +0000 UTC m=+4727.682611063" Oct 01 17:00:31 crc kubenswrapper[4949]: I1001 17:00:31.101988 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:31 crc kubenswrapper[4949]: I1001 17:00:31.102303 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="unhealthy" pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:31 crc kubenswrapper[4949]: I1001 17:00:31.147280 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:41 crc kubenswrapper[4949]: I1001 17:00:41.180769 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:41 crc kubenswrapper[4949]: I1001 17:00:41.237040 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wvs4"] Oct 01 17:00:41 crc kubenswrapper[4949]: I1001 17:00:41.457934 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-7wvs4" podUID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerName="registry-server" containerID="cri-o://320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1" gracePeriod=2 Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.011972 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.064954 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-catalog-content\") pod \"8df7475f-1dd7-47e4-bfef-ad0789feb408\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.065153 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mrvv\" (UniqueName: \"kubernetes.io/projected/8df7475f-1dd7-47e4-bfef-ad0789feb408-kube-api-access-8mrvv\") pod \"8df7475f-1dd7-47e4-bfef-ad0789feb408\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.065252 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-utilities\") pod \"8df7475f-1dd7-47e4-bfef-ad0789feb408\" (UID: \"8df7475f-1dd7-47e4-bfef-ad0789feb408\") " Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.066806 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-utilities" (OuterVolumeSpecName: "utilities") pod "8df7475f-1dd7-47e4-bfef-ad0789feb408" (UID: "8df7475f-1dd7-47e4-bfef-ad0789feb408"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.096257 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8df7475f-1dd7-47e4-bfef-ad0789feb408-kube-api-access-8mrvv" (OuterVolumeSpecName: "kube-api-access-8mrvv") pod "8df7475f-1dd7-47e4-bfef-ad0789feb408" (UID: "8df7475f-1dd7-47e4-bfef-ad0789feb408"). InnerVolumeSpecName "kube-api-access-8mrvv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.116411 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8df7475f-1dd7-47e4-bfef-ad0789feb408" (UID: "8df7475f-1dd7-47e4-bfef-ad0789feb408"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.167118 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mrvv\" (UniqueName: \"kubernetes.io/projected/8df7475f-1dd7-47e4-bfef-ad0789feb408-kube-api-access-8mrvv\") on node \"crc\" DevicePath \"\"" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.167159 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.167168 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8df7475f-1dd7-47e4-bfef-ad0789feb408-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.468719 4949 generic.go:334] "Generic (PLEG): container finished" podID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerID="320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1" exitCode=0 Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.468772 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-7wvs4" event={"ID":"8df7475f-1dd7-47e4-bfef-ad0789feb408","Type":"ContainerDied","Data":"320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1"} Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.468802 4949 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/certified-operators-7wvs4" event={"ID":"8df7475f-1dd7-47e4-bfef-ad0789feb408","Type":"ContainerDied","Data":"273e3bfeeec340814a3d52620334e100465e4aef0ca2408e9f5113a1c96c3e0e"} Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.468923 4949 scope.go:117] "RemoveContainer" containerID="320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.469078 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-7wvs4" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.496289 4949 scope.go:117] "RemoveContainer" containerID="aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.506735 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-7wvs4"] Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.514663 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-7wvs4"] Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.536608 4949 scope.go:117] "RemoveContainer" containerID="48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.560365 4949 scope.go:117] "RemoveContainer" containerID="320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1" Oct 01 17:00:42 crc kubenswrapper[4949]: E1001 17:00:42.560778 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1\": container with ID starting with 320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1 not found: ID does not exist" containerID="320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 
17:00:42.560819 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1"} err="failed to get container status \"320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1\": rpc error: code = NotFound desc = could not find container \"320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1\": container with ID starting with 320c0a57ce358193465b0fce9d086a2eec91e92e90dfbf3ed7444b209e7446c1 not found: ID does not exist" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.560842 4949 scope.go:117] "RemoveContainer" containerID="aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee" Oct 01 17:00:42 crc kubenswrapper[4949]: E1001 17:00:42.561367 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee\": container with ID starting with aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee not found: ID does not exist" containerID="aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.561393 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee"} err="failed to get container status \"aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee\": rpc error: code = NotFound desc = could not find container \"aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee\": container with ID starting with aaa6aba03128befc99814ee78a17895b34ed7ce83ebef83f3eb4fc4799d2e8ee not found: ID does not exist" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.561411 4949 scope.go:117] "RemoveContainer" containerID="48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1" Oct 01 17:00:42 crc 
kubenswrapper[4949]: E1001 17:00:42.561694 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1\": container with ID starting with 48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1 not found: ID does not exist" containerID="48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1" Oct 01 17:00:42 crc kubenswrapper[4949]: I1001 17:00:42.561725 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1"} err="failed to get container status \"48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1\": rpc error: code = NotFound desc = could not find container \"48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1\": container with ID starting with 48e0390383a68ae702fbae27a5fcc81cde45e232fc34ff7845e44433fe24dac1 not found: ID does not exist" Oct 01 17:00:43 crc kubenswrapper[4949]: I1001 17:00:43.621776 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8df7475f-1dd7-47e4-bfef-ad0789feb408" path="/var/lib/kubelet/pods/8df7475f-1dd7-47e4-bfef-ad0789feb408/volumes" Oct 01 17:00:49 crc kubenswrapper[4949]: I1001 17:00:49.289413 4949 scope.go:117] "RemoveContainer" containerID="321fc938d06a3668d2c402b44b27d40ffed28a68ea9ac6311ce6fe8f1d41859a" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.149400 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/keystone-cron-29322301-vwh2x"] Oct 01 17:01:00 crc kubenswrapper[4949]: E1001 17:01:00.158459 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerName="registry-server" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.158631 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df7475f-1dd7-47e4-bfef-ad0789feb408" 
containerName="registry-server" Oct 01 17:01:00 crc kubenswrapper[4949]: E1001 17:01:00.158643 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerName="extract-content" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.158650 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerName="extract-content" Oct 01 17:01:00 crc kubenswrapper[4949]: E1001 17:01:00.158687 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerName="extract-utilities" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.158694 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerName="extract-utilities" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.158860 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="8df7475f-1dd7-47e4-bfef-ad0789feb408" containerName="registry-server" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.159523 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.274660 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-combined-ca-bundle\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.274772 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx6ls\" (UniqueName: \"kubernetes.io/projected/83bc5859-312b-4097-8bfc-0b53ea00e5a6-kube-api-access-qx6ls\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.274806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-config-data\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.274975 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-fernet-keys\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.278868 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29322301-vwh2x"] Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.376649 4949 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-qx6ls\" (UniqueName: \"kubernetes.io/projected/83bc5859-312b-4097-8bfc-0b53ea00e5a6-kube-api-access-qx6ls\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.376710 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-config-data\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.376836 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-fernet-keys\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.376932 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-combined-ca-bundle\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.383504 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-combined-ca-bundle\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.383838 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-fernet-keys\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.388886 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-config-data\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.395645 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx6ls\" (UniqueName: \"kubernetes.io/projected/83bc5859-312b-4097-8bfc-0b53ea00e5a6-kube-api-access-qx6ls\") pod \"keystone-cron-29322301-vwh2x\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:00 crc kubenswrapper[4949]: I1001 17:01:00.476943 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:01 crc kubenswrapper[4949]: I1001 17:01:01.084390 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/keystone-cron-29322301-vwh2x"] Oct 01 17:01:01 crc kubenswrapper[4949]: I1001 17:01:01.673090 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29322301-vwh2x" event={"ID":"83bc5859-312b-4097-8bfc-0b53ea00e5a6","Type":"ContainerStarted","Data":"28289151709b51eb5ded051621f75957858a261434268cb0f1356f4bdece54c8"} Oct 01 17:01:01 crc kubenswrapper[4949]: I1001 17:01:01.673486 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29322301-vwh2x" event={"ID":"83bc5859-312b-4097-8bfc-0b53ea00e5a6","Type":"ContainerStarted","Data":"6ce034d0d20ccdef7610a009c069156597c66ed42f180c5c270aaee350eba5fc"} Oct 01 17:01:01 crc kubenswrapper[4949]: I1001 17:01:01.691112 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/keystone-cron-29322301-vwh2x" podStartSLOduration=1.691088903 podStartE2EDuration="1.691088903s" podCreationTimestamp="2025-10-01 17:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 17:01:01.689338805 +0000 UTC m=+4760.994944996" watchObservedRunningTime="2025-10-01 17:01:01.691088903 +0000 UTC m=+4760.996695094" Oct 01 17:01:05 crc kubenswrapper[4949]: I1001 17:01:05.705935 4949 generic.go:334] "Generic (PLEG): container finished" podID="83bc5859-312b-4097-8bfc-0b53ea00e5a6" containerID="28289151709b51eb5ded051621f75957858a261434268cb0f1356f4bdece54c8" exitCode=0 Oct 01 17:01:05 crc kubenswrapper[4949]: I1001 17:01:05.706045 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29322301-vwh2x" 
event={"ID":"83bc5859-312b-4097-8bfc-0b53ea00e5a6","Type":"ContainerDied","Data":"28289151709b51eb5ded051621f75957858a261434268cb0f1356f4bdece54c8"} Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.112934 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.221984 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-config-data\") pod \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.222112 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-combined-ca-bundle\") pod \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.222255 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qx6ls\" (UniqueName: \"kubernetes.io/projected/83bc5859-312b-4097-8bfc-0b53ea00e5a6-kube-api-access-qx6ls\") pod \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.222326 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-fernet-keys\") pod \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\" (UID: \"83bc5859-312b-4097-8bfc-0b53ea00e5a6\") " Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.229217 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83bc5859-312b-4097-8bfc-0b53ea00e5a6-kube-api-access-qx6ls" 
(OuterVolumeSpecName: "kube-api-access-qx6ls") pod "83bc5859-312b-4097-8bfc-0b53ea00e5a6" (UID: "83bc5859-312b-4097-8bfc-0b53ea00e5a6"). InnerVolumeSpecName "kube-api-access-qx6ls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.231181 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "83bc5859-312b-4097-8bfc-0b53ea00e5a6" (UID: "83bc5859-312b-4097-8bfc-0b53ea00e5a6"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.276431 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "83bc5859-312b-4097-8bfc-0b53ea00e5a6" (UID: "83bc5859-312b-4097-8bfc-0b53ea00e5a6"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.292785 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-config-data" (OuterVolumeSpecName: "config-data") pod "83bc5859-312b-4097-8bfc-0b53ea00e5a6" (UID: "83bc5859-312b-4097-8bfc-0b53ea00e5a6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.324594 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.324653 4949 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.324668 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qx6ls\" (UniqueName: \"kubernetes.io/projected/83bc5859-312b-4097-8bfc-0b53ea00e5a6-kube-api-access-qx6ls\") on node \"crc\" DevicePath \"\"" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.324681 4949 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/83bc5859-312b-4097-8bfc-0b53ea00e5a6-fernet-keys\") on node \"crc\" DevicePath \"\"" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.725205 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/keystone-cron-29322301-vwh2x" event={"ID":"83bc5859-312b-4097-8bfc-0b53ea00e5a6","Type":"ContainerDied","Data":"6ce034d0d20ccdef7610a009c069156597c66ed42f180c5c270aaee350eba5fc"} Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.725243 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ce034d0d20ccdef7610a009c069156597c66ed42f180c5c270aaee350eba5fc" Oct 01 17:01:07 crc kubenswrapper[4949]: I1001 17:01:07.725253 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/keystone-cron-29322301-vwh2x" Oct 01 17:01:29 crc kubenswrapper[4949]: I1001 17:01:29.716089 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pp4fb"] Oct 01 17:01:29 crc kubenswrapper[4949]: E1001 17:01:29.717249 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="83bc5859-312b-4097-8bfc-0b53ea00e5a6" containerName="keystone-cron" Oct 01 17:01:29 crc kubenswrapper[4949]: I1001 17:01:29.717265 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="83bc5859-312b-4097-8bfc-0b53ea00e5a6" containerName="keystone-cron" Oct 01 17:01:29 crc kubenswrapper[4949]: I1001 17:01:29.717537 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="83bc5859-312b-4097-8bfc-0b53ea00e5a6" containerName="keystone-cron" Oct 01 17:01:29 crc kubenswrapper[4949]: I1001 17:01:29.719344 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:29 crc kubenswrapper[4949]: I1001 17:01:29.735984 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pp4fb"] Oct 01 17:01:29 crc kubenswrapper[4949]: I1001 17:01:29.902036 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-utilities\") pod \"community-operators-pp4fb\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:29 crc kubenswrapper[4949]: I1001 17:01:29.902104 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-catalog-content\") pod \"community-operators-pp4fb\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " 
pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:29 crc kubenswrapper[4949]: I1001 17:01:29.902159 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfrqb\" (UniqueName: \"kubernetes.io/projected/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-kube-api-access-nfrqb\") pod \"community-operators-pp4fb\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.003788 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-utilities\") pod \"community-operators-pp4fb\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.003844 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-catalog-content\") pod \"community-operators-pp4fb\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.003876 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfrqb\" (UniqueName: \"kubernetes.io/projected/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-kube-api-access-nfrqb\") pod \"community-operators-pp4fb\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.004326 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-utilities\") pod \"community-operators-pp4fb\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " 
pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.004792 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-catalog-content\") pod \"community-operators-pp4fb\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.045950 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfrqb\" (UniqueName: \"kubernetes.io/projected/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-kube-api-access-nfrqb\") pod \"community-operators-pp4fb\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.046747 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.570639 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pp4fb"] Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.943834 4949 generic.go:334] "Generic (PLEG): container finished" podID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerID="9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890" exitCode=0 Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.943938 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fb" event={"ID":"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd","Type":"ContainerDied","Data":"9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890"} Oct 01 17:01:30 crc kubenswrapper[4949]: I1001 17:01:30.944118 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fb" 
event={"ID":"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd","Type":"ContainerStarted","Data":"4f8c53128e414ffd81d007be7f37ba7a0babcb54ea681332d12a41219a713d45"} Oct 01 17:01:33 crc kubenswrapper[4949]: I1001 17:01:33.976051 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fb" event={"ID":"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd","Type":"ContainerStarted","Data":"76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958"} Oct 01 17:01:39 crc kubenswrapper[4949]: I1001 17:01:39.033592 4949 generic.go:334] "Generic (PLEG): container finished" podID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerID="76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958" exitCode=0 Oct 01 17:01:39 crc kubenswrapper[4949]: I1001 17:01:39.033825 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fb" event={"ID":"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd","Type":"ContainerDied","Data":"76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958"} Oct 01 17:01:40 crc kubenswrapper[4949]: I1001 17:01:40.047473 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fb" event={"ID":"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd","Type":"ContainerStarted","Data":"eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35"} Oct 01 17:01:40 crc kubenswrapper[4949]: I1001 17:01:40.073738 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pp4fb" podStartSLOduration=2.374800351 podStartE2EDuration="11.073706072s" podCreationTimestamp="2025-10-01 17:01:29 +0000 UTC" firstStartedPulling="2025-10-01 17:01:30.945640671 +0000 UTC m=+4790.251246882" lastFinishedPulling="2025-10-01 17:01:39.644546402 +0000 UTC m=+4798.950152603" observedRunningTime="2025-10-01 17:01:40.064917638 +0000 UTC m=+4799.370523889" watchObservedRunningTime="2025-10-01 17:01:40.073706072 +0000 UTC 
m=+4799.379312293" Oct 01 17:01:50 crc kubenswrapper[4949]: I1001 17:01:50.047265 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:50 crc kubenswrapper[4949]: I1001 17:01:50.048700 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:50 crc kubenswrapper[4949]: I1001 17:01:50.094527 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:50 crc kubenswrapper[4949]: I1001 17:01:50.192020 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:50 crc kubenswrapper[4949]: I1001 17:01:50.332914 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pp4fb"] Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.164943 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pp4fb" podUID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerName="registry-server" containerID="cri-o://eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35" gracePeriod=2 Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.678370 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.773465 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-catalog-content\") pod \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.773589 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-utilities\") pod \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.773656 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nfrqb\" (UniqueName: \"kubernetes.io/projected/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-kube-api-access-nfrqb\") pod \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\" (UID: \"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd\") " Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.775559 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-utilities" (OuterVolumeSpecName: "utilities") pod "cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" (UID: "cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.782339 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-kube-api-access-nfrqb" (OuterVolumeSpecName: "kube-api-access-nfrqb") pod "cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" (UID: "cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd"). InnerVolumeSpecName "kube-api-access-nfrqb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.826775 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" (UID: "cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.875806 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.875852 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nfrqb\" (UniqueName: \"kubernetes.io/projected/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-kube-api-access-nfrqb\") on node \"crc\" DevicePath \"\"" Oct 01 17:01:52 crc kubenswrapper[4949]: I1001 17:01:52.875862 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.177286 4949 generic.go:334] "Generic (PLEG): container finished" podID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerID="eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35" exitCode=0 Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.177338 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fb" event={"ID":"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd","Type":"ContainerDied","Data":"eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35"} Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.177361 4949 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="openshift-marketplace/community-operators-pp4fb" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.177376 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pp4fb" event={"ID":"cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd","Type":"ContainerDied","Data":"4f8c53128e414ffd81d007be7f37ba7a0babcb54ea681332d12a41219a713d45"} Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.177397 4949 scope.go:117] "RemoveContainer" containerID="eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.196173 4949 scope.go:117] "RemoveContainer" containerID="76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.224226 4949 scope.go:117] "RemoveContainer" containerID="9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.230252 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pp4fb"] Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.239905 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pp4fb"] Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.260428 4949 scope.go:117] "RemoveContainer" containerID="eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35" Oct 01 17:01:53 crc kubenswrapper[4949]: E1001 17:01:53.261465 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35\": container with ID starting with eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35 not found: ID does not exist" containerID="eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.261520 
4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35"} err="failed to get container status \"eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35\": rpc error: code = NotFound desc = could not find container \"eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35\": container with ID starting with eb38631256ae80eee511c6e7cd0cb5cd88438b3c4718822cf521dc1950dadb35 not found: ID does not exist" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.261551 4949 scope.go:117] "RemoveContainer" containerID="76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958" Oct 01 17:01:53 crc kubenswrapper[4949]: E1001 17:01:53.261897 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958\": container with ID starting with 76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958 not found: ID does not exist" containerID="76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.261940 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958"} err="failed to get container status \"76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958\": rpc error: code = NotFound desc = could not find container \"76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958\": container with ID starting with 76c910a10f371eaf335271d70784507db60abc927f85e664959c5cebbd2fa958 not found: ID does not exist" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.261974 4949 scope.go:117] "RemoveContainer" containerID="9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890" Oct 01 17:01:53 crc kubenswrapper[4949]: E1001 
17:01:53.262361 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890\": container with ID starting with 9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890 not found: ID does not exist" containerID="9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.262394 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890"} err="failed to get container status \"9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890\": rpc error: code = NotFound desc = could not find container \"9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890\": container with ID starting with 9f14d0383664c8df2f579f9fc6539eb97b546fcb000b9b7defec7fc335cfc890 not found: ID does not exist" Oct 01 17:01:53 crc kubenswrapper[4949]: I1001 17:01:53.611814 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" path="/var/lib/kubelet/pods/cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd/volumes" Oct 01 17:02:18 crc kubenswrapper[4949]: I1001 17:02:18.038925 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:02:18 crc kubenswrapper[4949]: I1001 17:02:18.039622 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 01 17:02:48 crc kubenswrapper[4949]: I1001 17:02:48.039296 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:02:48 crc kubenswrapper[4949]: I1001 17:02:48.039927 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 17:03:18 crc kubenswrapper[4949]: I1001 17:03:18.039197 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:03:18 crc kubenswrapper[4949]: I1001 17:03:18.039814 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 17:03:18 crc kubenswrapper[4949]: I1001 17:03:18.039871 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 17:03:18 crc kubenswrapper[4949]: I1001 17:03:18.040755 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4"} 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 17:03:18 crc kubenswrapper[4949]: I1001 17:03:18.040814 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" gracePeriod=600 Oct 01 17:03:18 crc kubenswrapper[4949]: E1001 17:03:18.181515 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:03:19 crc kubenswrapper[4949]: I1001 17:03:19.010157 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" exitCode=0 Oct 01 17:03:19 crc kubenswrapper[4949]: I1001 17:03:19.010265 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4"} Oct 01 17:03:19 crc kubenswrapper[4949]: I1001 17:03:19.010608 4949 scope.go:117] "RemoveContainer" containerID="6b0a9adeb590d3e2b219b7c6db7fca17ec92b8ee94f12ad2f3b0eff4d03c455e" Oct 01 17:03:19 crc kubenswrapper[4949]: I1001 17:03:19.011666 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 
01 17:03:19 crc kubenswrapper[4949]: E1001 17:03:19.012228 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:03:34 crc kubenswrapper[4949]: I1001 17:03:34.602015 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:03:34 crc kubenswrapper[4949]: E1001 17:03:34.603256 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:03:47 crc kubenswrapper[4949]: I1001 17:03:47.601704 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:03:47 crc kubenswrapper[4949]: E1001 17:03:47.602460 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:04:01 crc kubenswrapper[4949]: I1001 17:04:01.633957 4949 scope.go:117] "RemoveContainer" 
containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:04:01 crc kubenswrapper[4949]: E1001 17:04:01.635526 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:04:12 crc kubenswrapper[4949]: I1001 17:04:12.601795 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:04:12 crc kubenswrapper[4949]: E1001 17:04:12.602545 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:04:26 crc kubenswrapper[4949]: I1001 17:04:26.601675 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:04:26 crc kubenswrapper[4949]: E1001 17:04:26.602461 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:04:40 crc kubenswrapper[4949]: I1001 17:04:40.601990 4949 scope.go:117] 
"RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:04:40 crc kubenswrapper[4949]: E1001 17:04:40.603546 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:04:49 crc kubenswrapper[4949]: I1001 17:04:49.469752 4949 scope.go:117] "RemoveContainer" containerID="885798d6bf5cadd31dd27a89eb313d6f3590ee0ddbe0634bab7da2b3aeb6e654" Oct 01 17:04:49 crc kubenswrapper[4949]: I1001 17:04:49.506100 4949 scope.go:117] "RemoveContainer" containerID="31640c673f317762d4ba37ff753577117490e6cb7f99c761eff190a14d4b943a" Oct 01 17:04:49 crc kubenswrapper[4949]: I1001 17:04:49.555939 4949 scope.go:117] "RemoveContainer" containerID="6d9edda847c17355e2547fd32d09f54222f6f59e7036b5cbea4d8de976341b0e" Oct 01 17:04:55 crc kubenswrapper[4949]: I1001 17:04:55.601808 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:04:55 crc kubenswrapper[4949]: E1001 17:04:55.602543 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:05:07 crc kubenswrapper[4949]: I1001 17:05:07.601477 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:05:07 crc 
kubenswrapper[4949]: E1001 17:05:07.602105 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:05:19 crc kubenswrapper[4949]: I1001 17:05:19.602616 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:05:19 crc kubenswrapper[4949]: E1001 17:05:19.603672 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:05:33 crc kubenswrapper[4949]: I1001 17:05:33.602288 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:05:33 crc kubenswrapper[4949]: E1001 17:05:33.603661 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:05:45 crc kubenswrapper[4949]: I1001 17:05:45.602405 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 
01 17:05:45 crc kubenswrapper[4949]: E1001 17:05:45.603334 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:06:00 crc kubenswrapper[4949]: I1001 17:06:00.602808 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:06:00 crc kubenswrapper[4949]: E1001 17:06:00.604015 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:06:12 crc kubenswrapper[4949]: I1001 17:06:12.601826 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:06:12 crc kubenswrapper[4949]: E1001 17:06:12.602767 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:06:24 crc kubenswrapper[4949]: I1001 17:06:24.602240 4949 scope.go:117] "RemoveContainer" 
containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:06:24 crc kubenswrapper[4949]: E1001 17:06:24.604355 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:06:37 crc kubenswrapper[4949]: I1001 17:06:37.601551 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:06:37 crc kubenswrapper[4949]: E1001 17:06:37.602817 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:06:49 crc kubenswrapper[4949]: I1001 17:06:49.602233 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:06:49 crc kubenswrapper[4949]: E1001 17:06:49.603176 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:07:03 crc kubenswrapper[4949]: I1001 17:07:03.614639 4949 scope.go:117] 
"RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:07:03 crc kubenswrapper[4949]: E1001 17:07:03.615432 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:07:18 crc kubenswrapper[4949]: I1001 17:07:18.602082 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:07:18 crc kubenswrapper[4949]: E1001 17:07:18.602861 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:07:31 crc kubenswrapper[4949]: I1001 17:07:31.608782 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:07:31 crc kubenswrapper[4949]: E1001 17:07:31.609893 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:07:46 crc kubenswrapper[4949]: I1001 17:07:46.601775 
4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:07:46 crc kubenswrapper[4949]: E1001 17:07:46.602388 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:07:58 crc kubenswrapper[4949]: I1001 17:07:58.602026 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:07:58 crc kubenswrapper[4949]: E1001 17:07:58.602999 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:08:12 crc kubenswrapper[4949]: I1001 17:08:12.601202 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:08:12 crc kubenswrapper[4949]: E1001 17:08:12.601897 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:08:26 crc kubenswrapper[4949]: I1001 
17:08:26.601730 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4" Oct 01 17:08:27 crc kubenswrapper[4949]: I1001 17:08:27.070813 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"ade2b444188b0955770924aec19fde4922abe57cd21ed2e7a9bcdec8233b346e"} Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.796118 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-mm4ct"] Oct 01 17:08:47 crc kubenswrapper[4949]: E1001 17:08:47.797087 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerName="registry-server" Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.797103 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerName="registry-server" Oct 01 17:08:47 crc kubenswrapper[4949]: E1001 17:08:47.797146 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerName="extract-utilities" Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.797153 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerName="extract-utilities" Oct 01 17:08:47 crc kubenswrapper[4949]: E1001 17:08:47.797201 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerName="extract-content" Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.797208 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerName="extract-content" Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.797428 4949 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="cbb9cd2d-acd6-462f-a2e8-c09cad8b7fdd" containerName="registry-server" Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.798853 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.806446 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm4ct"] Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.932582 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pfmcm\" (UniqueName: \"kubernetes.io/projected/e89b7b79-8151-4ca5-8f04-193ba5f19431-kube-api-access-pfmcm\") pod \"redhat-marketplace-mm4ct\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.932685 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-utilities\") pod \"redhat-marketplace-mm4ct\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:47 crc kubenswrapper[4949]: I1001 17:08:47.932843 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-catalog-content\") pod \"redhat-marketplace-mm4ct\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:48 crc kubenswrapper[4949]: I1001 17:08:48.034522 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-utilities\") pod \"redhat-marketplace-mm4ct\" (UID: 
\"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:48 crc kubenswrapper[4949]: I1001 17:08:48.035262 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-utilities\") pod \"redhat-marketplace-mm4ct\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:48 crc kubenswrapper[4949]: I1001 17:08:48.035334 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-catalog-content\") pod \"redhat-marketplace-mm4ct\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:48 crc kubenswrapper[4949]: I1001 17:08:48.035451 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pfmcm\" (UniqueName: \"kubernetes.io/projected/e89b7b79-8151-4ca5-8f04-193ba5f19431-kube-api-access-pfmcm\") pod \"redhat-marketplace-mm4ct\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:48 crc kubenswrapper[4949]: I1001 17:08:48.036100 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-catalog-content\") pod \"redhat-marketplace-mm4ct\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:48 crc kubenswrapper[4949]: I1001 17:08:48.059745 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pfmcm\" (UniqueName: \"kubernetes.io/projected/e89b7b79-8151-4ca5-8f04-193ba5f19431-kube-api-access-pfmcm\") pod \"redhat-marketplace-mm4ct\" (UID: 
\"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:48 crc kubenswrapper[4949]: I1001 17:08:48.187508 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:08:48 crc kubenswrapper[4949]: I1001 17:08:48.618838 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm4ct"] Oct 01 17:08:48 crc kubenswrapper[4949]: W1001 17:08:48.625867 4949 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode89b7b79_8151_4ca5_8f04_193ba5f19431.slice/crio-4077ff7e5ec619e9588a79fa5f256fced5f32545affceded8f0f5cb4effc0bb8 WatchSource:0}: Error finding container 4077ff7e5ec619e9588a79fa5f256fced5f32545affceded8f0f5cb4effc0bb8: Status 404 returned error can't find the container with id 4077ff7e5ec619e9588a79fa5f256fced5f32545affceded8f0f5cb4effc0bb8 Oct 01 17:08:49 crc kubenswrapper[4949]: I1001 17:08:49.286787 4949 generic.go:334] "Generic (PLEG): container finished" podID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerID="6e5d0c01842643286db9325aef985a9a28d136708eaf36c8a3b4b7aa41fd8415" exitCode=0 Oct 01 17:08:49 crc kubenswrapper[4949]: I1001 17:08:49.286893 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm4ct" event={"ID":"e89b7b79-8151-4ca5-8f04-193ba5f19431","Type":"ContainerDied","Data":"6e5d0c01842643286db9325aef985a9a28d136708eaf36c8a3b4b7aa41fd8415"} Oct 01 17:08:49 crc kubenswrapper[4949]: I1001 17:08:49.287107 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm4ct" event={"ID":"e89b7b79-8151-4ca5-8f04-193ba5f19431","Type":"ContainerStarted","Data":"4077ff7e5ec619e9588a79fa5f256fced5f32545affceded8f0f5cb4effc0bb8"} Oct 01 17:08:49 crc kubenswrapper[4949]: I1001 17:08:49.317115 4949 provider.go:102] Refreshing 
cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 17:09:00 crc kubenswrapper[4949]: I1001 17:09:00.425058 4949 generic.go:334] "Generic (PLEG): container finished" podID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerID="5e9b76d6e961d0557c2f6bfeccb27b1a85c442a10517972b7a8a23aba91c96c8" exitCode=0 Oct 01 17:09:00 crc kubenswrapper[4949]: I1001 17:09:00.425728 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm4ct" event={"ID":"e89b7b79-8151-4ca5-8f04-193ba5f19431","Type":"ContainerDied","Data":"5e9b76d6e961d0557c2f6bfeccb27b1a85c442a10517972b7a8a23aba91c96c8"} Oct 01 17:09:02 crc kubenswrapper[4949]: I1001 17:09:02.452683 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm4ct" event={"ID":"e89b7b79-8151-4ca5-8f04-193ba5f19431","Type":"ContainerStarted","Data":"b04e062fc529f662bcb6812e6c82d614d02462582795aa71dfc50f1ba195577c"} Oct 01 17:09:02 crc kubenswrapper[4949]: I1001 17:09:02.470564 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-mm4ct" podStartSLOduration=3.028003478 podStartE2EDuration="15.470540765s" podCreationTimestamp="2025-10-01 17:08:47 +0000 UTC" firstStartedPulling="2025-10-01 17:08:49.316270288 +0000 UTC m=+5228.621876509" lastFinishedPulling="2025-10-01 17:09:01.758807605 +0000 UTC m=+5241.064413796" observedRunningTime="2025-10-01 17:09:02.468295273 +0000 UTC m=+5241.773901474" watchObservedRunningTime="2025-10-01 17:09:02.470540765 +0000 UTC m=+5241.776146956" Oct 01 17:09:08 crc kubenswrapper[4949]: I1001 17:09:08.194261 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:09:08 crc kubenswrapper[4949]: I1001 17:09:08.195224 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:09:08 crc 
kubenswrapper[4949]: I1001 17:09:08.278501 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:09:08 crc kubenswrapper[4949]: I1001 17:09:08.570335 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:09:08 crc kubenswrapper[4949]: I1001 17:09:08.619993 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm4ct"] Oct 01 17:09:10 crc kubenswrapper[4949]: I1001 17:09:10.526276 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-mm4ct" podUID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerName="registry-server" containerID="cri-o://b04e062fc529f662bcb6812e6c82d614d02462582795aa71dfc50f1ba195577c" gracePeriod=2 Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.539063 4949 generic.go:334] "Generic (PLEG): container finished" podID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerID="b04e062fc529f662bcb6812e6c82d614d02462582795aa71dfc50f1ba195577c" exitCode=0 Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.540010 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm4ct" event={"ID":"e89b7b79-8151-4ca5-8f04-193ba5f19431","Type":"ContainerDied","Data":"b04e062fc529f662bcb6812e6c82d614d02462582795aa71dfc50f1ba195577c"} Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.540059 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-mm4ct" event={"ID":"e89b7b79-8151-4ca5-8f04-193ba5f19431","Type":"ContainerDied","Data":"4077ff7e5ec619e9588a79fa5f256fced5f32545affceded8f0f5cb4effc0bb8"} Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.540074 4949 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="4077ff7e5ec619e9588a79fa5f256fced5f32545affceded8f0f5cb4effc0bb8" Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.552080 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.728022 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-catalog-content\") pod \"e89b7b79-8151-4ca5-8f04-193ba5f19431\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.728323 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pfmcm\" (UniqueName: \"kubernetes.io/projected/e89b7b79-8151-4ca5-8f04-193ba5f19431-kube-api-access-pfmcm\") pod \"e89b7b79-8151-4ca5-8f04-193ba5f19431\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.728440 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-utilities\") pod \"e89b7b79-8151-4ca5-8f04-193ba5f19431\" (UID: \"e89b7b79-8151-4ca5-8f04-193ba5f19431\") " Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.730263 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-utilities" (OuterVolumeSpecName: "utilities") pod "e89b7b79-8151-4ca5-8f04-193ba5f19431" (UID: "e89b7b79-8151-4ca5-8f04-193ba5f19431"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.742330 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e89b7b79-8151-4ca5-8f04-193ba5f19431-kube-api-access-pfmcm" (OuterVolumeSpecName: "kube-api-access-pfmcm") pod "e89b7b79-8151-4ca5-8f04-193ba5f19431" (UID: "e89b7b79-8151-4ca5-8f04-193ba5f19431"). InnerVolumeSpecName "kube-api-access-pfmcm". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.746529 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e89b7b79-8151-4ca5-8f04-193ba5f19431" (UID: "e89b7b79-8151-4ca5-8f04-193ba5f19431"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.834242 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.834398 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pfmcm\" (UniqueName: \"kubernetes.io/projected/e89b7b79-8151-4ca5-8f04-193ba5f19431-kube-api-access-pfmcm\") on node \"crc\" DevicePath \"\"" Oct 01 17:09:11 crc kubenswrapper[4949]: I1001 17:09:11.834411 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e89b7b79-8151-4ca5-8f04-193ba5f19431-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 17:09:12 crc kubenswrapper[4949]: I1001 17:09:12.548557 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-mm4ct" Oct 01 17:09:12 crc kubenswrapper[4949]: I1001 17:09:12.580067 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm4ct"] Oct 01 17:09:12 crc kubenswrapper[4949]: I1001 17:09:12.588146 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-mm4ct"] Oct 01 17:09:13 crc kubenswrapper[4949]: I1001 17:09:13.614322 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e89b7b79-8151-4ca5-8f04-193ba5f19431" path="/var/lib/kubelet/pods/e89b7b79-8151-4ca5-8f04-193ba5f19431/volumes" Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.818461 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-lkzd4"] Oct 01 17:10:37 crc kubenswrapper[4949]: E1001 17:10:37.819361 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerName="registry-server" Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.819373 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerName="registry-server" Oct 01 17:10:37 crc kubenswrapper[4949]: E1001 17:10:37.819388 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerName="extract-content" Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.819394 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerName="extract-content" Oct 01 17:10:37 crc kubenswrapper[4949]: E1001 17:10:37.819430 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerName="extract-utilities" Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.819437 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e89b7b79-8151-4ca5-8f04-193ba5f19431" 
containerName="extract-utilities" Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.819616 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e89b7b79-8151-4ca5-8f04-193ba5f19431" containerName="registry-server" Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.821019 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.833531 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkzd4"] Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.967893 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-catalog-content\") pod \"redhat-operators-lkzd4\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") " pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.968256 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-utilities\") pod \"redhat-operators-lkzd4\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") " pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:37 crc kubenswrapper[4949]: I1001 17:10:37.968376 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cvdgf\" (UniqueName: \"kubernetes.io/projected/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-kube-api-access-cvdgf\") pod \"redhat-operators-lkzd4\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") " pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:38 crc kubenswrapper[4949]: I1001 17:10:38.071224 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" 
(UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-catalog-content\") pod \"redhat-operators-lkzd4\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") " pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:38 crc kubenswrapper[4949]: I1001 17:10:38.071298 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-utilities\") pod \"redhat-operators-lkzd4\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") " pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:38 crc kubenswrapper[4949]: I1001 17:10:38.071334 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cvdgf\" (UniqueName: \"kubernetes.io/projected/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-kube-api-access-cvdgf\") pod \"redhat-operators-lkzd4\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") " pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:38 crc kubenswrapper[4949]: I1001 17:10:38.071699 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-catalog-content\") pod \"redhat-operators-lkzd4\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") " pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:38 crc kubenswrapper[4949]: I1001 17:10:38.071805 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-utilities\") pod \"redhat-operators-lkzd4\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") " pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:38 crc kubenswrapper[4949]: I1001 17:10:38.092156 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cvdgf\" (UniqueName: 
\"kubernetes.io/projected/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-kube-api-access-cvdgf\") pod \"redhat-operators-lkzd4\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") " pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:38 crc kubenswrapper[4949]: I1001 17:10:38.147421 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzd4" Oct 01 17:10:38 crc kubenswrapper[4949]: I1001 17:10:38.650366 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-lkzd4"] Oct 01 17:10:39 crc kubenswrapper[4949]: I1001 17:10:39.518251 4949 generic.go:334] "Generic (PLEG): container finished" podID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerID="8000826694c9686e59e0bfe3d3ea17b18a7aa0a30027e0fc7d1f1034c0b93d30" exitCode=0 Oct 01 17:10:39 crc kubenswrapper[4949]: I1001 17:10:39.518569 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzd4" event={"ID":"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11","Type":"ContainerDied","Data":"8000826694c9686e59e0bfe3d3ea17b18a7aa0a30027e0fc7d1f1034c0b93d30"} Oct 01 17:10:39 crc kubenswrapper[4949]: I1001 17:10:39.518596 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzd4" event={"ID":"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11","Type":"ContainerStarted","Data":"f4a6b4be650a5f267e773c7792eeabd516cb52453348f122f955166db844aba8"} Oct 01 17:10:40 crc kubenswrapper[4949]: I1001 17:10:40.533999 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzd4" event={"ID":"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11","Type":"ContainerStarted","Data":"511681f1eae4118dfc47744ce44fd048e1460948fbdb7cdb51bd7f5ffa4ce264"} Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.617040 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zlrfn"] Oct 01 17:10:42 crc 
kubenswrapper[4949]: I1001 17:10:42.620949 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.655477 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zlrfn"]
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.793625 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-catalog-content\") pod \"certified-operators-zlrfn\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") " pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.793705 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-utilities\") pod \"certified-operators-zlrfn\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") " pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.793771 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zqlqg\" (UniqueName: \"kubernetes.io/projected/b4f27569-6592-45a5-a4c9-9eca176513a4-kube-api-access-zqlqg\") pod \"certified-operators-zlrfn\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") " pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.896190 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-catalog-content\") pod \"certified-operators-zlrfn\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") " pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.896296 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-utilities\") pod \"certified-operators-zlrfn\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") " pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.896392 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zqlqg\" (UniqueName: \"kubernetes.io/projected/b4f27569-6592-45a5-a4c9-9eca176513a4-kube-api-access-zqlqg\") pod \"certified-operators-zlrfn\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") " pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.897067 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-catalog-content\") pod \"certified-operators-zlrfn\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") " pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.897185 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-utilities\") pod \"certified-operators-zlrfn\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") " pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.917731 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zqlqg\" (UniqueName: \"kubernetes.io/projected/b4f27569-6592-45a5-a4c9-9eca176513a4-kube-api-access-zqlqg\") pod \"certified-operators-zlrfn\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") " pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:42 crc kubenswrapper[4949]: I1001 17:10:42.959967 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:43 crc kubenswrapper[4949]: I1001 17:10:43.519477 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zlrfn"]
Oct 01 17:10:43 crc kubenswrapper[4949]: I1001 17:10:43.562066 4949 generic.go:334] "Generic (PLEG): container finished" podID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerID="511681f1eae4118dfc47744ce44fd048e1460948fbdb7cdb51bd7f5ffa4ce264" exitCode=0
Oct 01 17:10:43 crc kubenswrapper[4949]: I1001 17:10:43.562111 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzd4" event={"ID":"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11","Type":"ContainerDied","Data":"511681f1eae4118dfc47744ce44fd048e1460948fbdb7cdb51bd7f5ffa4ce264"}
Oct 01 17:10:44 crc kubenswrapper[4949]: I1001 17:10:44.577151 4949 generic.go:334] "Generic (PLEG): container finished" podID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerID="feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925" exitCode=0
Oct 01 17:10:44 crc kubenswrapper[4949]: I1001 17:10:44.577266 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlrfn" event={"ID":"b4f27569-6592-45a5-a4c9-9eca176513a4","Type":"ContainerDied","Data":"feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925"}
Oct 01 17:10:44 crc kubenswrapper[4949]: I1001 17:10:44.577726 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlrfn" event={"ID":"b4f27569-6592-45a5-a4c9-9eca176513a4","Type":"ContainerStarted","Data":"1755696f7fffdcd783f7b421e34dd3ec70bc9c783acf810c0af1b0bc4ffac896"}
Oct 01 17:10:45 crc kubenswrapper[4949]: I1001 17:10:45.587687 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzd4" event={"ID":"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11","Type":"ContainerStarted","Data":"4ffb94ddab60b18e283a228f27374a0f79a1b7e185253fb8c49e710a1370bcb4"}
Oct 01 17:10:45 crc kubenswrapper[4949]: I1001 17:10:45.610040 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-lkzd4" podStartSLOduration=3.621440681 podStartE2EDuration="8.610026377s" podCreationTimestamp="2025-10-01 17:10:37 +0000 UTC" firstStartedPulling="2025-10-01 17:10:39.520641353 +0000 UTC m=+5338.826247554" lastFinishedPulling="2025-10-01 17:10:44.509227039 +0000 UTC m=+5343.814833250" observedRunningTime="2025-10-01 17:10:45.603952188 +0000 UTC m=+5344.909558399" watchObservedRunningTime="2025-10-01 17:10:45.610026377 +0000 UTC m=+5344.915632568"
Oct 01 17:10:46 crc kubenswrapper[4949]: I1001 17:10:46.608493 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlrfn" event={"ID":"b4f27569-6592-45a5-a4c9-9eca176513a4","Type":"ContainerStarted","Data":"3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579"}
Oct 01 17:10:48 crc kubenswrapper[4949]: I1001 17:10:48.038544 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 01 17:10:48 crc kubenswrapper[4949]: I1001 17:10:48.038879 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 01 17:10:48 crc kubenswrapper[4949]: I1001 17:10:48.147659 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-lkzd4"
Oct 01 17:10:48 crc kubenswrapper[4949]: I1001 17:10:48.147805 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-lkzd4"
Oct 01 17:10:49 crc kubenswrapper[4949]: I1001 17:10:49.196039 4949 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-lkzd4" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerName="registry-server" probeResult="failure" output=<
Oct 01 17:10:49 crc kubenswrapper[4949]: timeout: failed to connect service ":50051" within 1s
Oct 01 17:10:49 crc kubenswrapper[4949]: >
Oct 01 17:10:49 crc kubenswrapper[4949]: I1001 17:10:49.633822 4949 generic.go:334] "Generic (PLEG): container finished" podID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerID="3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579" exitCode=0
Oct 01 17:10:49 crc kubenswrapper[4949]: I1001 17:10:49.633861 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlrfn" event={"ID":"b4f27569-6592-45a5-a4c9-9eca176513a4","Type":"ContainerDied","Data":"3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579"}
Oct 01 17:10:51 crc kubenswrapper[4949]: I1001 17:10:51.657925 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlrfn" event={"ID":"b4f27569-6592-45a5-a4c9-9eca176513a4","Type":"ContainerStarted","Data":"bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780"}
Oct 01 17:10:51 crc kubenswrapper[4949]: I1001 17:10:51.694430 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-zlrfn" podStartSLOduration=3.664797633 podStartE2EDuration="9.694406414s" podCreationTimestamp="2025-10-01 17:10:42 +0000 UTC" firstStartedPulling="2025-10-01 17:10:44.579793212 +0000 UTC m=+5343.885399413" lastFinishedPulling="2025-10-01 17:10:50.609401993 +0000 UTC m=+5349.915008194" observedRunningTime="2025-10-01 17:10:51.693931051 +0000 UTC m=+5350.999537242" watchObservedRunningTime="2025-10-01 17:10:51.694406414 +0000 UTC m=+5351.000012605"
Oct 01 17:10:52 crc kubenswrapper[4949]: I1001 17:10:52.960812 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:52 crc kubenswrapper[4949]: I1001 17:10:52.961307 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:53 crc kubenswrapper[4949]: I1001 17:10:53.024794 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:10:58 crc kubenswrapper[4949]: I1001 17:10:58.209059 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-lkzd4"
Oct 01 17:10:58 crc kubenswrapper[4949]: I1001 17:10:58.260975 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-lkzd4"
Oct 01 17:10:58 crc kubenswrapper[4949]: I1001 17:10:58.449463 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkzd4"]
Oct 01 17:10:59 crc kubenswrapper[4949]: I1001 17:10:59.727099 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-lkzd4" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerName="registry-server" containerID="cri-o://4ffb94ddab60b18e283a228f27374a0f79a1b7e185253fb8c49e710a1370bcb4" gracePeriod=2
Oct 01 17:11:00 crc kubenswrapper[4949]: I1001 17:11:00.737401 4949 generic.go:334] "Generic (PLEG): container finished" podID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerID="4ffb94ddab60b18e283a228f27374a0f79a1b7e185253fb8c49e710a1370bcb4" exitCode=0
Oct 01 17:11:00 crc kubenswrapper[4949]: I1001 17:11:00.737467 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzd4" event={"ID":"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11","Type":"ContainerDied","Data":"4ffb94ddab60b18e283a228f27374a0f79a1b7e185253fb8c49e710a1370bcb4"}
Oct 01 17:11:00 crc kubenswrapper[4949]: I1001 17:11:00.893507 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzd4"
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.078978 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-utilities\") pod \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") "
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.079223 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-catalog-content\") pod \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") "
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.079305 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cvdgf\" (UniqueName: \"kubernetes.io/projected/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-kube-api-access-cvdgf\") pod \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\" (UID: \"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11\") "
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.079976 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-utilities" (OuterVolumeSpecName: "utilities") pod "13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" (UID: "13f8e94c-6a79-4a64-acc9-4f31a7d1ec11"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.080381 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-utilities\") on node \"crc\" DevicePath \"\""
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.087490 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-kube-api-access-cvdgf" (OuterVolumeSpecName: "kube-api-access-cvdgf") pod "13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" (UID: "13f8e94c-6a79-4a64-acc9-4f31a7d1ec11"). InnerVolumeSpecName "kube-api-access-cvdgf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.157799 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" (UID: "13f8e94c-6a79-4a64-acc9-4f31a7d1ec11"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.182797 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.182840 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cvdgf\" (UniqueName: \"kubernetes.io/projected/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11-kube-api-access-cvdgf\") on node \"crc\" DevicePath \"\""
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.752841 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-lkzd4" event={"ID":"13f8e94c-6a79-4a64-acc9-4f31a7d1ec11","Type":"ContainerDied","Data":"f4a6b4be650a5f267e773c7792eeabd516cb52453348f122f955166db844aba8"}
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.752938 4949 scope.go:117] "RemoveContainer" containerID="4ffb94ddab60b18e283a228f27374a0f79a1b7e185253fb8c49e710a1370bcb4"
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.752958 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-lkzd4"
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.787333 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-lkzd4"]
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.788931 4949 scope.go:117] "RemoveContainer" containerID="511681f1eae4118dfc47744ce44fd048e1460948fbdb7cdb51bd7f5ffa4ce264"
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.805046 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-lkzd4"]
Oct 01 17:11:01 crc kubenswrapper[4949]: I1001 17:11:01.820326 4949 scope.go:117] "RemoveContainer" containerID="8000826694c9686e59e0bfe3d3ea17b18a7aa0a30027e0fc7d1f1034c0b93d30"
Oct 01 17:11:03 crc kubenswrapper[4949]: I1001 17:11:03.010377 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:11:03 crc kubenswrapper[4949]: I1001 17:11:03.617165 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" path="/var/lib/kubelet/pods/13f8e94c-6a79-4a64-acc9-4f31a7d1ec11/volumes"
Oct 01 17:11:03 crc kubenswrapper[4949]: I1001 17:11:03.844015 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zlrfn"]
Oct 01 17:11:03 crc kubenswrapper[4949]: I1001 17:11:03.844463 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zlrfn" podUID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerName="registry-server" containerID="cri-o://bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780" gracePeriod=2
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.426764 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.464188 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zqlqg\" (UniqueName: \"kubernetes.io/projected/b4f27569-6592-45a5-a4c9-9eca176513a4-kube-api-access-zqlqg\") pod \"b4f27569-6592-45a5-a4c9-9eca176513a4\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") "
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.464349 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-utilities\") pod \"b4f27569-6592-45a5-a4c9-9eca176513a4\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") "
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.464448 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-catalog-content\") pod \"b4f27569-6592-45a5-a4c9-9eca176513a4\" (UID: \"b4f27569-6592-45a5-a4c9-9eca176513a4\") "
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.465279 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-utilities" (OuterVolumeSpecName: "utilities") pod "b4f27569-6592-45a5-a4c9-9eca176513a4" (UID: "b4f27569-6592-45a5-a4c9-9eca176513a4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.469977 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4f27569-6592-45a5-a4c9-9eca176513a4-kube-api-access-zqlqg" (OuterVolumeSpecName: "kube-api-access-zqlqg") pod "b4f27569-6592-45a5-a4c9-9eca176513a4" (UID: "b4f27569-6592-45a5-a4c9-9eca176513a4"). InnerVolumeSpecName "kube-api-access-zqlqg". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.527989 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b4f27569-6592-45a5-a4c9-9eca176513a4" (UID: "b4f27569-6592-45a5-a4c9-9eca176513a4"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.565741 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-catalog-content\") on node \"crc\" DevicePath \"\""
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.565776 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zqlqg\" (UniqueName: \"kubernetes.io/projected/b4f27569-6592-45a5-a4c9-9eca176513a4-kube-api-access-zqlqg\") on node \"crc\" DevicePath \"\""
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.565788 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b4f27569-6592-45a5-a4c9-9eca176513a4-utilities\") on node \"crc\" DevicePath \"\""
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.797938 4949 generic.go:334] "Generic (PLEG): container finished" podID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerID="bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780" exitCode=0
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.798020 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zlrfn"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.798055 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlrfn" event={"ID":"b4f27569-6592-45a5-a4c9-9eca176513a4","Type":"ContainerDied","Data":"bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780"}
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.798114 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zlrfn" event={"ID":"b4f27569-6592-45a5-a4c9-9eca176513a4","Type":"ContainerDied","Data":"1755696f7fffdcd783f7b421e34dd3ec70bc9c783acf810c0af1b0bc4ffac896"}
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.798167 4949 scope.go:117] "RemoveContainer" containerID="bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.827411 4949 scope.go:117] "RemoveContainer" containerID="3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.843741 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zlrfn"]
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.852886 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zlrfn"]
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.876314 4949 scope.go:117] "RemoveContainer" containerID="feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.918360 4949 scope.go:117] "RemoveContainer" containerID="bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780"
Oct 01 17:11:04 crc kubenswrapper[4949]: E1001 17:11:04.918821 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780\": container with ID starting with bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780 not found: ID does not exist" containerID="bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.918854 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780"} err="failed to get container status \"bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780\": rpc error: code = NotFound desc = could not find container \"bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780\": container with ID starting with bbc8b7c4a181b9412b0b158f38f5a6849029a785b33b3d1efb62635031e59780 not found: ID does not exist"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.918880 4949 scope.go:117] "RemoveContainer" containerID="3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579"
Oct 01 17:11:04 crc kubenswrapper[4949]: E1001 17:11:04.919151 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579\": container with ID starting with 3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579 not found: ID does not exist" containerID="3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.919183 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579"} err="failed to get container status \"3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579\": rpc error: code = NotFound desc = could not find container \"3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579\": container with ID starting with 3a35a5c989009fae27a54e3cd2d046e87f96c5f5c0ea28c227b5f06c28e1c579 not found: ID does not exist"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.919201 4949 scope.go:117] "RemoveContainer" containerID="feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925"
Oct 01 17:11:04 crc kubenswrapper[4949]: E1001 17:11:04.919510 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925\": container with ID starting with feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925 not found: ID does not exist" containerID="feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925"
Oct 01 17:11:04 crc kubenswrapper[4949]: I1001 17:11:04.919537 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925"} err="failed to get container status \"feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925\": rpc error: code = NotFound desc = could not find container \"feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925\": container with ID starting with feb7464476636b403a037906249d45cf322270f4050b213c8637846b2c2e9925 not found: ID does not exist"
Oct 01 17:11:05 crc kubenswrapper[4949]: I1001 17:11:05.617974 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4f27569-6592-45a5-a4c9-9eca176513a4" path="/var/lib/kubelet/pods/b4f27569-6592-45a5-a4c9-9eca176513a4/volumes"
Oct 01 17:11:18 crc kubenswrapper[4949]: I1001 17:11:18.039401 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 01 17:11:18 crc kubenswrapper[4949]: I1001 17:11:18.040254 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 01 17:11:48 crc kubenswrapper[4949]: I1001 17:11:48.038436 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Oct 01 17:11:48 crc kubenswrapper[4949]: I1001 17:11:48.038985 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Oct 01 17:11:48 crc kubenswrapper[4949]: I1001 17:11:48.039037 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287"
Oct 01 17:11:48 crc kubenswrapper[4949]: I1001 17:11:48.039940 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ade2b444188b0955770924aec19fde4922abe57cd21ed2e7a9bcdec8233b346e"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Oct 01 17:11:48 crc kubenswrapper[4949]: I1001 17:11:48.040002 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://ade2b444188b0955770924aec19fde4922abe57cd21ed2e7a9bcdec8233b346e" gracePeriod=600
Oct 01 17:11:48 crc kubenswrapper[4949]: I1001 17:11:48.239925 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="ade2b444188b0955770924aec19fde4922abe57cd21ed2e7a9bcdec8233b346e" exitCode=0
Oct 01 17:11:48 crc kubenswrapper[4949]: I1001 17:11:48.240102 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"ade2b444188b0955770924aec19fde4922abe57cd21ed2e7a9bcdec8233b346e"}
Oct 01 17:11:48 crc kubenswrapper[4949]: I1001 17:11:48.240198 4949 scope.go:117] "RemoveContainer" containerID="5247ad550b785477d0797e78873846113ccbbc9ee2a1f1cf5fcc792baaa4b1d4"
Oct 01 17:11:49 crc kubenswrapper[4949]: I1001 17:11:49.250865 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0"}
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.880424 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-4jnbv"]
Oct 01 17:12:42 crc kubenswrapper[4949]: E1001 17:12:42.881583 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerName="registry-server"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.881609 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerName="registry-server"
Oct 01 17:12:42 crc kubenswrapper[4949]: E1001 17:12:42.881654 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerName="registry-server"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.881667 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerName="registry-server"
Oct 01 17:12:42 crc kubenswrapper[4949]: E1001 17:12:42.881692 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerName="extract-utilities"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.881706 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerName="extract-utilities"
Oct 01 17:12:42 crc kubenswrapper[4949]: E1001 17:12:42.881737 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerName="extract-content"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.881748 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerName="extract-content"
Oct 01 17:12:42 crc kubenswrapper[4949]: E1001 17:12:42.881764 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerName="extract-utilities"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.881777 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerName="extract-utilities"
Oct 01 17:12:42 crc kubenswrapper[4949]: E1001 17:12:42.881803 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerName="extract-content"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.881814 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerName="extract-content"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.882181 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="13f8e94c-6a79-4a64-acc9-4f31a7d1ec11" containerName="registry-server"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.882204 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="b4f27569-6592-45a5-a4c9-9eca176513a4" containerName="registry-server"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.884590 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:42 crc kubenswrapper[4949]: I1001 17:12:42.901396 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jnbv"]
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.060364 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-catalog-content\") pod \"community-operators-4jnbv\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.060451 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v949s\" (UniqueName: \"kubernetes.io/projected/eb997f71-78ab-43b2-8186-ca747b2198c4-kube-api-access-v949s\") pod \"community-operators-4jnbv\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.060504 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-utilities\") pod \"community-operators-4jnbv\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.162418 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v949s\" (UniqueName: \"kubernetes.io/projected/eb997f71-78ab-43b2-8186-ca747b2198c4-kube-api-access-v949s\") pod \"community-operators-4jnbv\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.162987 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-utilities\") pod \"community-operators-4jnbv\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.163320 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-catalog-content\") pod \"community-operators-4jnbv\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.163627 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-utilities\") pod \"community-operators-4jnbv\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.163772 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-catalog-content\") pod \"community-operators-4jnbv\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.203151 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v949s\" (UniqueName: \"kubernetes.io/projected/eb997f71-78ab-43b2-8186-ca747b2198c4-kube-api-access-v949s\") pod \"community-operators-4jnbv\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.214457 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.754389 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-4jnbv"]
Oct 01 17:12:43 crc kubenswrapper[4949]: I1001 17:12:43.826062 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jnbv" event={"ID":"eb997f71-78ab-43b2-8186-ca747b2198c4","Type":"ContainerStarted","Data":"6c2e747cf6b0ab18e5cdcc771169176fc35690629a153c97c5dd0bc8176441ae"}
Oct 01 17:12:44 crc kubenswrapper[4949]: I1001 17:12:44.835986 4949 generic.go:334] "Generic (PLEG): container finished" podID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerID="f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3" exitCode=0
Oct 01 17:12:44 crc kubenswrapper[4949]: I1001 17:12:44.836030 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jnbv" event={"ID":"eb997f71-78ab-43b2-8186-ca747b2198c4","Type":"ContainerDied","Data":"f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3"}
Oct 01 17:12:46 crc kubenswrapper[4949]: I1001 17:12:46.856628 4949 generic.go:334] "Generic (PLEG): container finished" podID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerID="f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493" exitCode=0
Oct 01 17:12:46 crc kubenswrapper[4949]: I1001 17:12:46.856778 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jnbv" event={"ID":"eb997f71-78ab-43b2-8186-ca747b2198c4","Type":"ContainerDied","Data":"f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493"}
Oct 01 17:12:48 crc kubenswrapper[4949]: I1001 17:12:48.888031 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jnbv" event={"ID":"eb997f71-78ab-43b2-8186-ca747b2198c4","Type":"ContainerStarted","Data":"78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c"}
Oct 01 17:12:48 crc kubenswrapper[4949]: I1001 17:12:48.904232 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-4jnbv" podStartSLOduration=3.838452304 podStartE2EDuration="6.904213539s" podCreationTimestamp="2025-10-01 17:12:42 +0000 UTC" firstStartedPulling="2025-10-01 17:12:44.83788901 +0000 UTC m=+5464.143495201" lastFinishedPulling="2025-10-01 17:12:47.903650245 +0000 UTC m=+5467.209256436" observedRunningTime="2025-10-01 17:12:48.904183948 +0000 UTC m=+5468.209790159" watchObservedRunningTime="2025-10-01 17:12:48.904213539 +0000 UTC m=+5468.209819730"
Oct 01 17:12:53 crc kubenswrapper[4949]: I1001 17:12:53.215198 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:53 crc kubenswrapper[4949]: I1001 17:12:53.215594 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:53 crc kubenswrapper[4949]: I1001 17:12:53.296993 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:53 crc kubenswrapper[4949]: I1001 17:12:53.980742 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-4jnbv"
Oct 01 17:12:54 crc kubenswrapper[4949]: I1001 17:12:54.029664 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-4jnbv"] Oct 01 17:12:55 crc kubenswrapper[4949]: I1001 17:12:55.954315 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-4jnbv" podUID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerName="registry-server" containerID="cri-o://78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c" gracePeriod=2 Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.463775 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jnbv" Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.629749 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-utilities\") pod \"eb997f71-78ab-43b2-8186-ca747b2198c4\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.629867 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v949s\" (UniqueName: \"kubernetes.io/projected/eb997f71-78ab-43b2-8186-ca747b2198c4-kube-api-access-v949s\") pod \"eb997f71-78ab-43b2-8186-ca747b2198c4\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.629944 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-catalog-content\") pod \"eb997f71-78ab-43b2-8186-ca747b2198c4\" (UID: \"eb997f71-78ab-43b2-8186-ca747b2198c4\") " Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.630659 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-utilities" (OuterVolumeSpecName: "utilities") pod "eb997f71-78ab-43b2-8186-ca747b2198c4" (UID: 
"eb997f71-78ab-43b2-8186-ca747b2198c4"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.632020 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.636329 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb997f71-78ab-43b2-8186-ca747b2198c4-kube-api-access-v949s" (OuterVolumeSpecName: "kube-api-access-v949s") pod "eb997f71-78ab-43b2-8186-ca747b2198c4" (UID: "eb997f71-78ab-43b2-8186-ca747b2198c4"). InnerVolumeSpecName "kube-api-access-v949s". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.733939 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v949s\" (UniqueName: \"kubernetes.io/projected/eb997f71-78ab-43b2-8186-ca747b2198c4-kube-api-access-v949s\") on node \"crc\" DevicePath \"\"" Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.967048 4949 generic.go:334] "Generic (PLEG): container finished" podID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerID="78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c" exitCode=0 Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.967106 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jnbv" event={"ID":"eb997f71-78ab-43b2-8186-ca747b2198c4","Type":"ContainerDied","Data":"78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c"} Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.967168 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-4jnbv" 
event={"ID":"eb997f71-78ab-43b2-8186-ca747b2198c4","Type":"ContainerDied","Data":"6c2e747cf6b0ab18e5cdcc771169176fc35690629a153c97c5dd0bc8176441ae"} Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.967165 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-4jnbv" Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.967186 4949 scope.go:117] "RemoveContainer" containerID="78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c" Oct 01 17:12:56 crc kubenswrapper[4949]: I1001 17:12:56.992502 4949 scope.go:117] "RemoveContainer" containerID="f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.030104 4949 scope.go:117] "RemoveContainer" containerID="f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.067219 4949 scope.go:117] "RemoveContainer" containerID="78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c" Oct 01 17:12:57 crc kubenswrapper[4949]: E1001 17:12:57.073355 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c\": container with ID starting with 78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c not found: ID does not exist" containerID="78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.073402 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c"} err="failed to get container status \"78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c\": rpc error: code = NotFound desc = could not find container \"78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c\": 
container with ID starting with 78c1ea68f6073bb69de37223674feb701988cf335fcdd4fbeb63f45bac3c066c not found: ID does not exist" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.073430 4949 scope.go:117] "RemoveContainer" containerID="f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493" Oct 01 17:12:57 crc kubenswrapper[4949]: E1001 17:12:57.073767 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493\": container with ID starting with f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493 not found: ID does not exist" containerID="f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.073787 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493"} err="failed to get container status \"f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493\": rpc error: code = NotFound desc = could not find container \"f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493\": container with ID starting with f07d9099a1950029589acc8017b1661f22df7a9a8c4f4403d43b978f4bfa0493 not found: ID does not exist" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.073802 4949 scope.go:117] "RemoveContainer" containerID="f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3" Oct 01 17:12:57 crc kubenswrapper[4949]: E1001 17:12:57.074186 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3\": container with ID starting with f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3 not found: ID does not exist" 
containerID="f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.074229 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3"} err="failed to get container status \"f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3\": rpc error: code = NotFound desc = could not find container \"f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3\": container with ID starting with f1e7586d9b541f8be548c2d0b60d8be67784d04f6c2e3138cccbb462bcd3acf3 not found: ID does not exist" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.767804 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb997f71-78ab-43b2-8186-ca747b2198c4" (UID: "eb997f71-78ab-43b2-8186-ca747b2198c4"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.858455 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb997f71-78ab-43b2-8186-ca747b2198c4-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.913099 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-4jnbv"] Oct 01 17:12:57 crc kubenswrapper[4949]: I1001 17:12:57.921293 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-4jnbv"] Oct 01 17:12:59 crc kubenswrapper[4949]: I1001 17:12:59.626176 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb997f71-78ab-43b2-8186-ca747b2198c4" path="/var/lib/kubelet/pods/eb997f71-78ab-43b2-8186-ca747b2198c4/volumes" Oct 01 17:13:48 crc kubenswrapper[4949]: I1001 17:13:48.495243 4949 generic.go:334] "Generic (PLEG): container finished" podID="375603dd-5dd3-4d2f-ac58-5335ebc721c0" containerID="90ccc406714e8d1f7386a089e60c53f461963176e960fc92e32aca72f4c72c3b" exitCode=1 Oct 01 17:13:48 crc kubenswrapper[4949]: I1001 17:13:48.495347 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"375603dd-5dd3-4d2f-ac58-5335ebc721c0","Type":"ContainerDied","Data":"90ccc406714e8d1f7386a089e60c53f461963176e960fc92e32aca72f4c72c3b"} Oct 01 17:13:49 crc kubenswrapper[4949]: I1001 17:13:49.875596 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.065994 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ssh-key\") pod \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.066212 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-logs\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.066269 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ca-certs\") pod \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.066342 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-workdir\") pod \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.066396 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config\") pod \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.066459 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"test-operator-ephemeral-temporary\" 
(UniqueName: \"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-temporary\") pod \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.066528 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-config-data\") pod \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.066572 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8r6nh\" (UniqueName: \"kubernetes.io/projected/375603dd-5dd3-4d2f-ac58-5335ebc721c0-kube-api-access-8r6nh\") pod \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.066606 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config-secret\") pod \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\" (UID: \"375603dd-5dd3-4d2f-ac58-5335ebc721c0\") " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.067367 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-temporary" (OuterVolumeSpecName: "test-operator-ephemeral-temporary") pod "375603dd-5dd3-4d2f-ac58-5335ebc721c0" (UID: "375603dd-5dd3-4d2f-ac58-5335ebc721c0"). InnerVolumeSpecName "test-operator-ephemeral-temporary". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.067621 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-config-data" (OuterVolumeSpecName: "config-data") pod "375603dd-5dd3-4d2f-ac58-5335ebc721c0" (UID: "375603dd-5dd3-4d2f-ac58-5335ebc721c0"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.068047 4949 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-temporary\" (UniqueName: \"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-temporary\") on node \"crc\" DevicePath \"\"" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.068089 4949 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-config-data\") on node \"crc\" DevicePath \"\"" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.070697 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-workdir" (OuterVolumeSpecName: "test-operator-ephemeral-workdir") pod "375603dd-5dd3-4d2f-ac58-5335ebc721c0" (UID: "375603dd-5dd3-4d2f-ac58-5335ebc721c0"). InnerVolumeSpecName "test-operator-ephemeral-workdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.071991 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "test-operator-logs") pod "375603dd-5dd3-4d2f-ac58-5335ebc721c0" (UID: "375603dd-5dd3-4d2f-ac58-5335ebc721c0"). InnerVolumeSpecName "local-storage07-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.073505 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/375603dd-5dd3-4d2f-ac58-5335ebc721c0-kube-api-access-8r6nh" (OuterVolumeSpecName: "kube-api-access-8r6nh") pod "375603dd-5dd3-4d2f-ac58-5335ebc721c0" (UID: "375603dd-5dd3-4d2f-ac58-5335ebc721c0"). InnerVolumeSpecName "kube-api-access-8r6nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.098243 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ssh-key" (OuterVolumeSpecName: "ssh-key") pod "375603dd-5dd3-4d2f-ac58-5335ebc721c0" (UID: "375603dd-5dd3-4d2f-ac58-5335ebc721c0"). InnerVolumeSpecName "ssh-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.100682 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ca-certs" (OuterVolumeSpecName: "ca-certs") pod "375603dd-5dd3-4d2f-ac58-5335ebc721c0" (UID: "375603dd-5dd3-4d2f-ac58-5335ebc721c0"). InnerVolumeSpecName "ca-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.102679 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "375603dd-5dd3-4d2f-ac58-5335ebc721c0" (UID: "375603dd-5dd3-4d2f-ac58-5335ebc721c0"). InnerVolumeSpecName "openstack-config-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.124341 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "375603dd-5dd3-4d2f-ac58-5335ebc721c0" (UID: "375603dd-5dd3-4d2f-ac58-5335ebc721c0"). InnerVolumeSpecName "openstack-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.169710 4949 reconciler_common.go:293] "Volume detached for volume \"ssh-key\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ssh-key\") on node \"crc\" DevicePath \"\"" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.169777 4949 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.169788 4949 reconciler_common.go:293] "Volume detached for volume \"ca-certs\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-ca-certs\") on node \"crc\" DevicePath \"\"" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.169800 4949 reconciler_common.go:293] "Volume detached for volume \"test-operator-ephemeral-workdir\" (UniqueName: \"kubernetes.io/empty-dir/375603dd-5dd3-4d2f-ac58-5335ebc721c0-test-operator-ephemeral-workdir\") on node \"crc\" DevicePath \"\"" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.169816 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config\") on node \"crc\" DevicePath \"\"" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.169828 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8r6nh\" (UniqueName: 
\"kubernetes.io/projected/375603dd-5dd3-4d2f-ac58-5335ebc721c0-kube-api-access-8r6nh\") on node \"crc\" DevicePath \"\"" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.169840 4949 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/375603dd-5dd3-4d2f-ac58-5335ebc721c0-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.193188 4949 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.271377 4949 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.523971 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack/tempest-tests-tempest" Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.523814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/tempest-tests-tempest" event={"ID":"375603dd-5dd3-4d2f-ac58-5335ebc721c0","Type":"ContainerDied","Data":"d3f2d8c199bb8f9d4df2aba6de1909a436af752c4ea0d6e776c06e052ea5184e"} Oct 01 17:13:50 crc kubenswrapper[4949]: I1001 17:13:50.524533 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d3f2d8c199bb8f9d4df2aba6de1909a436af752c4ea0d6e776c06e052ea5184e" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.734789 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 01 17:13:56 crc kubenswrapper[4949]: E1001 17:13:56.735999 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerName="extract-content" Oct 01 17:13:56 crc 
kubenswrapper[4949]: I1001 17:13:56.736021 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerName="extract-content" Oct 01 17:13:56 crc kubenswrapper[4949]: E1001 17:13:56.736049 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerName="registry-server" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.736063 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerName="registry-server" Oct 01 17:13:56 crc kubenswrapper[4949]: E1001 17:13:56.736098 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerName="extract-utilities" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.736110 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerName="extract-utilities" Oct 01 17:13:56 crc kubenswrapper[4949]: E1001 17:13:56.736170 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="375603dd-5dd3-4d2f-ac58-5335ebc721c0" containerName="tempest-tests-tempest-tests-runner" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.736197 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="375603dd-5dd3-4d2f-ac58-5335ebc721c0" containerName="tempest-tests-tempest-tests-runner" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.736628 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb997f71-78ab-43b2-8186-ca747b2198c4" containerName="registry-server" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.736667 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="375603dd-5dd3-4d2f-ac58-5335ebc721c0" containerName="tempest-tests-tempest-tests-runner" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.737859 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.740080 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openstack"/"default-dockercfg-fx25w" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.748157 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.918806 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxtv9\" (UniqueName: \"kubernetes.io/projected/30fe503b-a2f7-4eb5-8bbc-c0872754b32d-kube-api-access-rxtv9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30fe503b-a2f7-4eb5-8bbc-c0872754b32d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 01 17:13:56 crc kubenswrapper[4949]: I1001 17:13:56.919269 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30fe503b-a2f7-4eb5-8bbc-c0872754b32d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 01 17:13:57 crc kubenswrapper[4949]: I1001 17:13:57.021030 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30fe503b-a2f7-4eb5-8bbc-c0872754b32d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 01 17:13:57 crc kubenswrapper[4949]: I1001 17:13:57.021186 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rxtv9\" (UniqueName: 
\"kubernetes.io/projected/30fe503b-a2f7-4eb5-8bbc-c0872754b32d-kube-api-access-rxtv9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30fe503b-a2f7-4eb5-8bbc-c0872754b32d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 01 17:13:57 crc kubenswrapper[4949]: I1001 17:13:57.021665 4949 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30fe503b-a2f7-4eb5-8bbc-c0872754b32d\") device mount path \"/mnt/openstack/pv07\"" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 01 17:13:57 crc kubenswrapper[4949]: I1001 17:13:57.046603 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxtv9\" (UniqueName: \"kubernetes.io/projected/30fe503b-a2f7-4eb5-8bbc-c0872754b32d-kube-api-access-rxtv9\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30fe503b-a2f7-4eb5-8bbc-c0872754b32d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 01 17:13:57 crc kubenswrapper[4949]: I1001 17:13:57.056777 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"test-operator-logs-pod-tempest-tempest-tests-tempest\" (UID: \"30fe503b-a2f7-4eb5-8bbc-c0872754b32d\") " pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 01 17:13:57 crc kubenswrapper[4949]: I1001 17:13:57.063075 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" Oct 01 17:13:57 crc kubenswrapper[4949]: I1001 17:13:57.537852 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack/test-operator-logs-pod-tempest-tempest-tests-tempest"] Oct 01 17:13:57 crc kubenswrapper[4949]: I1001 17:13:57.548483 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 17:13:57 crc kubenswrapper[4949]: I1001 17:13:57.618871 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"30fe503b-a2f7-4eb5-8bbc-c0872754b32d","Type":"ContainerStarted","Data":"d1ef7271d81642e88c0764aaaee94f8c0bce8fb3b7fce6325e8e632255cfd81f"} Oct 01 17:13:59 crc kubenswrapper[4949]: I1001 17:13:59.654271 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" event={"ID":"30fe503b-a2f7-4eb5-8bbc-c0872754b32d","Type":"ContainerStarted","Data":"6b3166562c5594d70518896c6e92d5db5f15f21b23a17ee373f387ef8a8c4bdd"} Oct 01 17:13:59 crc kubenswrapper[4949]: I1001 17:13:59.671497 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack/test-operator-logs-pod-tempest-tempest-tests-tempest" podStartSLOduration=2.232582165 podStartE2EDuration="3.671482169s" podCreationTimestamp="2025-10-01 17:13:56 +0000 UTC" firstStartedPulling="2025-10-01 17:13:57.548281397 +0000 UTC m=+5536.853887588" lastFinishedPulling="2025-10-01 17:13:58.987181401 +0000 UTC m=+5538.292787592" observedRunningTime="2025-10-01 17:13:59.670652457 +0000 UTC m=+5538.976258648" watchObservedRunningTime="2025-10-01 17:13:59.671482169 +0000 UTC m=+5538.977088350" Oct 01 17:14:18 crc kubenswrapper[4949]: I1001 17:14:18.039248 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:14:18 crc kubenswrapper[4949]: I1001 17:14:18.040997 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 17:14:36 crc kubenswrapper[4949]: I1001 17:14:36.939263 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4svq/must-gather-dqdst"] Oct 01 17:14:36 crc kubenswrapper[4949]: I1001 17:14:36.941426 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:14:36 crc kubenswrapper[4949]: I1001 17:14:36.942821 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-must-gather-c4svq"/"default-dockercfg-8v4fd" Oct 01 17:14:36 crc kubenswrapper[4949]: I1001 17:14:36.943103 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4svq"/"openshift-service-ca.crt" Oct 01 17:14:36 crc kubenswrapper[4949]: I1001 17:14:36.943296 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-c4svq"/"kube-root-ca.crt" Oct 01 17:14:36 crc kubenswrapper[4949]: I1001 17:14:36.948513 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4svq/must-gather-dqdst"] Oct 01 17:14:37 crc kubenswrapper[4949]: I1001 17:14:37.140116 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a49cf713-89e1-4955-bd33-3b894ebdd935-must-gather-output\") pod \"must-gather-dqdst\" (UID: \"a49cf713-89e1-4955-bd33-3b894ebdd935\") " 
pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:14:37 crc kubenswrapper[4949]: I1001 17:14:37.140183 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d62sj\" (UniqueName: \"kubernetes.io/projected/a49cf713-89e1-4955-bd33-3b894ebdd935-kube-api-access-d62sj\") pod \"must-gather-dqdst\" (UID: \"a49cf713-89e1-4955-bd33-3b894ebdd935\") " pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:14:37 crc kubenswrapper[4949]: I1001 17:14:37.242527 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a49cf713-89e1-4955-bd33-3b894ebdd935-must-gather-output\") pod \"must-gather-dqdst\" (UID: \"a49cf713-89e1-4955-bd33-3b894ebdd935\") " pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:14:37 crc kubenswrapper[4949]: I1001 17:14:37.242594 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d62sj\" (UniqueName: \"kubernetes.io/projected/a49cf713-89e1-4955-bd33-3b894ebdd935-kube-api-access-d62sj\") pod \"must-gather-dqdst\" (UID: \"a49cf713-89e1-4955-bd33-3b894ebdd935\") " pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:14:37 crc kubenswrapper[4949]: I1001 17:14:37.243734 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a49cf713-89e1-4955-bd33-3b894ebdd935-must-gather-output\") pod \"must-gather-dqdst\" (UID: \"a49cf713-89e1-4955-bd33-3b894ebdd935\") " pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:14:37 crc kubenswrapper[4949]: I1001 17:14:37.261114 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d62sj\" (UniqueName: \"kubernetes.io/projected/a49cf713-89e1-4955-bd33-3b894ebdd935-kube-api-access-d62sj\") pod \"must-gather-dqdst\" (UID: \"a49cf713-89e1-4955-bd33-3b894ebdd935\") " 
pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:14:37 crc kubenswrapper[4949]: I1001 17:14:37.261626 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:14:37 crc kubenswrapper[4949]: I1001 17:14:37.815042 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-c4svq/must-gather-dqdst"] Oct 01 17:14:38 crc kubenswrapper[4949]: I1001 17:14:38.074537 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/must-gather-dqdst" event={"ID":"a49cf713-89e1-4955-bd33-3b894ebdd935","Type":"ContainerStarted","Data":"817a92e7e2893eba24a4f781bd9c7a3297c1004fcbeba670723a74272f4991e9"} Oct 01 17:14:48 crc kubenswrapper[4949]: I1001 17:14:48.038984 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:14:48 crc kubenswrapper[4949]: I1001 17:14:48.039449 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 17:14:49 crc kubenswrapper[4949]: I1001 17:14:49.865885 4949 scope.go:117] "RemoveContainer" containerID="6e5d0c01842643286db9325aef985a9a28d136708eaf36c8a3b4b7aa41fd8415" Oct 01 17:14:52 crc kubenswrapper[4949]: I1001 17:14:52.236348 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/must-gather-dqdst" event={"ID":"a49cf713-89e1-4955-bd33-3b894ebdd935","Type":"ContainerStarted","Data":"669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5"} Oct 01 17:14:53 
crc kubenswrapper[4949]: I1001 17:14:53.246664 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/must-gather-dqdst" event={"ID":"a49cf713-89e1-4955-bd33-3b894ebdd935","Type":"ContainerStarted","Data":"1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b"} Oct 01 17:14:53 crc kubenswrapper[4949]: I1001 17:14:53.263975 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4svq/must-gather-dqdst" podStartSLOduration=3.497561599 podStartE2EDuration="17.263958634s" podCreationTimestamp="2025-10-01 17:14:36 +0000 UTC" firstStartedPulling="2025-10-01 17:14:37.826242546 +0000 UTC m=+5577.131848747" lastFinishedPulling="2025-10-01 17:14:51.592639601 +0000 UTC m=+5590.898245782" observedRunningTime="2025-10-01 17:14:53.260489497 +0000 UTC m=+5592.566095698" watchObservedRunningTime="2025-10-01 17:14:53.263958634 +0000 UTC m=+5592.569564825" Oct 01 17:14:56 crc kubenswrapper[4949]: I1001 17:14:56.875596 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4svq/crc-debug-g4g2s"] Oct 01 17:14:56 crc kubenswrapper[4949]: I1001 17:14:56.877412 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:14:56 crc kubenswrapper[4949]: I1001 17:14:56.945738 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-host\") pod \"crc-debug-g4g2s\" (UID: \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\") " pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:14:56 crc kubenswrapper[4949]: I1001 17:14:56.946184 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrjmr\" (UniqueName: \"kubernetes.io/projected/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-kube-api-access-jrjmr\") pod \"crc-debug-g4g2s\" (UID: \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\") " pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:14:57 crc kubenswrapper[4949]: I1001 17:14:57.047966 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jrjmr\" (UniqueName: \"kubernetes.io/projected/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-kube-api-access-jrjmr\") pod \"crc-debug-g4g2s\" (UID: \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\") " pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:14:57 crc kubenswrapper[4949]: I1001 17:14:57.048018 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-host\") pod \"crc-debug-g4g2s\" (UID: \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\") " pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:14:57 crc kubenswrapper[4949]: I1001 17:14:57.048143 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-host\") pod \"crc-debug-g4g2s\" (UID: \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\") " pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:14:57 crc 
kubenswrapper[4949]: I1001 17:14:57.068027 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrjmr\" (UniqueName: \"kubernetes.io/projected/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-kube-api-access-jrjmr\") pod \"crc-debug-g4g2s\" (UID: \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\") " pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:14:57 crc kubenswrapper[4949]: I1001 17:14:57.195695 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:14:57 crc kubenswrapper[4949]: I1001 17:14:57.283761 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" event={"ID":"5b95a0aa-aec2-4a1d-ae11-14485377a8e5","Type":"ContainerStarted","Data":"9f7e55d48aaa4975f25951ce99ab85925f615bf55ddddb4ce6630257377287ae"} Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.151559 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9"] Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.153202 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.159055 4949 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.159701 4949 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.178191 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9"] Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.217188 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-config-volume\") pod \"collect-profiles-29322315-z5gl9\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.217350 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl825\" (UniqueName: \"kubernetes.io/projected/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-kube-api-access-nl825\") pod \"collect-profiles-29322315-z5gl9\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.217393 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-secret-volume\") pod \"collect-profiles-29322315-z5gl9\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.320229 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nl825\" (UniqueName: \"kubernetes.io/projected/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-kube-api-access-nl825\") pod \"collect-profiles-29322315-z5gl9\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.320304 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-secret-volume\") pod \"collect-profiles-29322315-z5gl9\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.320408 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-config-volume\") pod \"collect-profiles-29322315-z5gl9\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.321357 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-config-volume\") pod \"collect-profiles-29322315-z5gl9\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.328403 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-secret-volume\") pod \"collect-profiles-29322315-z5gl9\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.337650 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl825\" (UniqueName: \"kubernetes.io/projected/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-kube-api-access-nl825\") pod \"collect-profiles-29322315-z5gl9\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:00 crc kubenswrapper[4949]: I1001 17:15:00.491354 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:01 crc kubenswrapper[4949]: I1001 17:15:01.055050 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9"] Oct 01 17:15:01 crc kubenswrapper[4949]: I1001 17:15:01.358022 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" event={"ID":"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940","Type":"ContainerStarted","Data":"4bd0a9a3413f46d79e3af26ef916986fc04755189a953344db5d9fa4169324dd"} Oct 01 17:15:01 crc kubenswrapper[4949]: I1001 17:15:01.358391 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" event={"ID":"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940","Type":"ContainerStarted","Data":"ed2df9748c51cb298409d65a588c06cb8aff5d54fc8a0474bbe9aacd8490096f"} Oct 01 17:15:02 crc kubenswrapper[4949]: I1001 17:15:02.368574 4949 generic.go:334] "Generic (PLEG): container finished" podID="e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940" 
containerID="4bd0a9a3413f46d79e3af26ef916986fc04755189a953344db5d9fa4169324dd" exitCode=0 Oct 01 17:15:02 crc kubenswrapper[4949]: I1001 17:15:02.368631 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" event={"ID":"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940","Type":"ContainerDied","Data":"4bd0a9a3413f46d79e3af26ef916986fc04755189a953344db5d9fa4169324dd"} Oct 01 17:15:03 crc kubenswrapper[4949]: I1001 17:15:03.807961 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.007678 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-config-volume\") pod \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.007734 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl825\" (UniqueName: \"kubernetes.io/projected/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-kube-api-access-nl825\") pod \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.007817 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-secret-volume\") pod \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\" (UID: \"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940\") " Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.008399 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-config-volume" (OuterVolumeSpecName: "config-volume") pod 
"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940" (UID: "e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.025598 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-kube-api-access-nl825" (OuterVolumeSpecName: "kube-api-access-nl825") pod "e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940" (UID: "e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940"). InnerVolumeSpecName "kube-api-access-nl825". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.038249 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940" (UID: "e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.110175 4949 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-config-volume\") on node \"crc\" DevicePath \"\"" Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.110208 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nl825\" (UniqueName: \"kubernetes.io/projected/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-kube-api-access-nl825\") on node \"crc\" DevicePath \"\"" Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.110219 4949 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940-secret-volume\") on node \"crc\" DevicePath \"\"" Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.393432 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" event={"ID":"e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940","Type":"ContainerDied","Data":"ed2df9748c51cb298409d65a588c06cb8aff5d54fc8a0474bbe9aacd8490096f"} Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.393479 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ed2df9748c51cb298409d65a588c06cb8aff5d54fc8a0474bbe9aacd8490096f" Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.393542 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29322315-z5gl9" Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.891246 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"] Oct 01 17:15:04 crc kubenswrapper[4949]: I1001 17:15:04.899998 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29322270-7grzs"] Oct 01 17:15:05 crc kubenswrapper[4949]: I1001 17:15:05.614190 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9525d427-6873-416a-bec5-e747a0ac944b" path="/var/lib/kubelet/pods/9525d427-6873-416a-bec5-e747a0ac944b/volumes" Oct 01 17:15:18 crc kubenswrapper[4949]: I1001 17:15:18.038932 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:15:18 crc kubenswrapper[4949]: I1001 17:15:18.039426 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 17:15:18 crc kubenswrapper[4949]: I1001 17:15:18.039471 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 17:15:18 crc kubenswrapper[4949]: I1001 17:15:18.040218 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0"} 
pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 17:15:18 crc kubenswrapper[4949]: I1001 17:15:18.040264 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" gracePeriod=600 Oct 01 17:15:18 crc kubenswrapper[4949]: E1001 17:15:18.614505 4949 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296" Oct 01 17:15:18 crc kubenswrapper[4949]: E1001 17:15:18.614944 4949 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:container-00,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296,Command:[chroot /host bash -c echo 'TOOLBOX_NAME=toolbox-osp' > /root/.toolboxrc ; rm -rf \"/var/tmp/sos-osp\" && mkdir -p \"/var/tmp/sos-osp\" && sudo podman rm --force toolbox-osp; sudo --preserve-env podman pull --authfile /var/lib/kubelet/config.json registry.redhat.io/rhel9/support-tools && toolbox sos report --batch --all-logs --only-plugins block,cifs,crio,devicemapper,devices,iscsi,lvm2,memory,multipath,nfs,nis,nvme,podman,process,processor,selinux,scsi,udev,logs,crypto --tmp-dir=\"/var/tmp/sos-osp\" && if [[ \"$(ls /var/log/pods/*/{*.log.*,*/*.log.*} 2>/dev/null)\" != '' ]]; then tar --ignore-failed-read --warning=no-file-changed -cJf \"/var/tmp/sos-osp/podlogs.tar.xz\" --transform 's,^,podlogs/,' /var/log/pods/*/{*.log.*,*/*.log.*} || true; 
fi],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:TMOUT,Value:900,ValueFrom:nil,},EnvVar{Name:HOST,Value:/host,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:host,ReadOnly:false,MountPath:/host,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jrjmr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod crc-debug-g4g2s_openshift-must-gather-c4svq(5b95a0aa-aec2-4a1d-ae11-14485377a8e5): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Oct 01 17:15:18 crc kubenswrapper[4949]: E1001 17:15:18.616200 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" podUID="5b95a0aa-aec2-4a1d-ae11-14485377a8e5" Oct 01 17:15:19 crc kubenswrapper[4949]: I1001 17:15:19.555246 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" exitCode=0 Oct 01 
17:15:19 crc kubenswrapper[4949]: I1001 17:15:19.555316 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0"} Oct 01 17:15:19 crc kubenswrapper[4949]: I1001 17:15:19.555589 4949 scope.go:117] "RemoveContainer" containerID="ade2b444188b0955770924aec19fde4922abe57cd21ed2e7a9bcdec8233b346e" Oct 01 17:15:19 crc kubenswrapper[4949]: E1001 17:15:19.557376 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"container-00\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6ab858aed98e4fe57e6b144da8e90ad5d6698bb4cc5521206f5c05809f0f9296\\\"\"" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" podUID="5b95a0aa-aec2-4a1d-ae11-14485377a8e5" Oct 01 17:15:25 crc kubenswrapper[4949]: E1001 17:15:25.229022 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:15:25 crc kubenswrapper[4949]: I1001 17:15:25.611609 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:15:25 crc kubenswrapper[4949]: E1001 17:15:25.611994 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:15:34 crc kubenswrapper[4949]: I1001 17:15:34.693824 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" event={"ID":"5b95a0aa-aec2-4a1d-ae11-14485377a8e5","Type":"ContainerStarted","Data":"925cfeb7095a68ef36491b415541990c31ae6336f6a03b03138dcbfb6b8fbfd2"} Oct 01 17:15:34 crc kubenswrapper[4949]: I1001 17:15:34.707439 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" podStartSLOduration=2.469179911 podStartE2EDuration="38.707417711s" podCreationTimestamp="2025-10-01 17:14:56 +0000 UTC" firstStartedPulling="2025-10-01 17:14:57.279225354 +0000 UTC m=+5596.584831535" lastFinishedPulling="2025-10-01 17:15:33.517463124 +0000 UTC m=+5632.823069335" observedRunningTime="2025-10-01 17:15:34.705569249 +0000 UTC m=+5634.011175440" watchObservedRunningTime="2025-10-01 17:15:34.707417711 +0000 UTC m=+5634.013023902" Oct 01 17:15:37 crc kubenswrapper[4949]: I1001 17:15:37.602772 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:15:37 crc kubenswrapper[4949]: E1001 17:15:37.603695 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:15:51 crc kubenswrapper[4949]: I1001 17:15:51.579324 4949 scope.go:117] "RemoveContainer" 
containerID="9f489ad3ab453cef7d9b794ca5b1a1eb87386140c9c9a7dc2e55d8b6fb47c5e1" Oct 01 17:15:52 crc kubenswrapper[4949]: I1001 17:15:52.160943 4949 scope.go:117] "RemoveContainer" containerID="b04e062fc529f662bcb6812e6c82d614d02462582795aa71dfc50f1ba195577c" Oct 01 17:15:52 crc kubenswrapper[4949]: I1001 17:15:52.184299 4949 scope.go:117] "RemoveContainer" containerID="5e9b76d6e961d0557c2f6bfeccb27b1a85c442a10517972b7a8a23aba91c96c8" Oct 01 17:15:52 crc kubenswrapper[4949]: I1001 17:15:52.604649 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:15:52 crc kubenswrapper[4949]: E1001 17:15:52.606880 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:15:59 crc kubenswrapper[4949]: I1001 17:15:59.131254 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b577bcff4-r4wxv_b22f7df5-f2f0-485a-b277-6196948f9cee/barbican-api/0.log" Oct 01 17:15:59 crc kubenswrapper[4949]: I1001 17:15:59.384484 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-api-b577bcff4-r4wxv_b22f7df5-f2f0-485a-b277-6196948f9cee/barbican-api-log/0.log" Oct 01 17:15:59 crc kubenswrapper[4949]: I1001 17:15:59.553548 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-keystone-listener-54c68d78fd-k7v8v_f5c8bcf5-419a-4094-ac1c-bed8d1610faf/barbican-keystone-listener/0.log" Oct 01 17:15:59 crc kubenswrapper[4949]: I1001 17:15:59.762495 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_barbican-keystone-listener-54c68d78fd-k7v8v_f5c8bcf5-419a-4094-ac1c-bed8d1610faf/barbican-keystone-listener-log/0.log" Oct 01 17:15:59 crc kubenswrapper[4949]: I1001 17:15:59.926965 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cdfb75847-bw4vd_36bab499-5905-4a12-baf4-dbbcd1422864/barbican-worker/0.log" Oct 01 17:15:59 crc kubenswrapper[4949]: I1001 17:15:59.975320 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_barbican-worker-5cdfb75847-bw4vd_36bab499-5905-4a12-baf4-dbbcd1422864/barbican-worker-log/0.log" Oct 01 17:16:00 crc kubenswrapper[4949]: I1001 17:16:00.197420 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_bootstrap-edpm-deployment-openstack-edpm-ipam-nw9r5_bf83f788-14e7-4c60-bdb0-174b3d343b75/bootstrap-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:00 crc kubenswrapper[4949]: I1001 17:16:00.403230 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bc079a4-d954-4112-8a29-06b54b15b8a0/ceilometer-central-agent/0.log" Oct 01 17:16:00 crc kubenswrapper[4949]: I1001 17:16:00.553546 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bc079a4-d954-4112-8a29-06b54b15b8a0/ceilometer-notification-agent/0.log" Oct 01 17:16:00 crc kubenswrapper[4949]: I1001 17:16:00.581552 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bc079a4-d954-4112-8a29-06b54b15b8a0/proxy-httpd/0.log" Oct 01 17:16:00 crc kubenswrapper[4949]: I1001 17:16:00.712415 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceilometer-0_8bc079a4-d954-4112-8a29-06b54b15b8a0/sg-core/0.log" Oct 01 17:16:00 crc kubenswrapper[4949]: I1001 17:16:00.883417 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ceph-client-edpm-deployment-openstack-edpm-ipam-dwfvq_be7b111a-f1ee-4539-8fa1-c16aa1bcf5e9/ceph-client-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:01 crc kubenswrapper[4949]: I1001 17:16:01.079196 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ceph-hci-pre-edpm-deployment-openstack-edpm-ipam-hqzm7_46ad0a13-d88e-4d0f-baa4-45b795d6f204/ceph-hci-pre-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:01 crc kubenswrapper[4949]: I1001 17:16:01.861139 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a/cinder-api/0.log" Oct 01 17:16:01 crc kubenswrapper[4949]: I1001 17:16:01.989284 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-api-0_2cc5b8af-7ba7-430a-8eab-4f56cb4cd26a/cinder-api-log/0.log" Oct 01 17:16:02 crc kubenswrapper[4949]: I1001 17:16:02.145782 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_bdb84d6a-de04-4107-917f-c2a6599ed2dc/probe/0.log" Oct 01 17:16:02 crc kubenswrapper[4949]: I1001 17:16:02.389797 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9621ba2f-9a4b-4a89-9e20-fd7f54600e34/cinder-scheduler/0.log" Oct 01 17:16:02 crc kubenswrapper[4949]: I1001 17:16:02.516591 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-scheduler-0_9621ba2f-9a4b-4a89-9e20-fd7f54600e34/probe/0.log" Oct 01 17:16:02 crc kubenswrapper[4949]: I1001 17:16:02.935618 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_a9aee921-7753-4725-a42d-ee8161afd631/probe/0.log" Oct 01 17:16:03 crc kubenswrapper[4949]: I1001 17:16:03.311039 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_configure-network-edpm-deployment-openstack-edpm-ipam-mhfqj_05704de2-46b5-4cce-bea8-9e1a00e0d2a5/configure-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:03 crc kubenswrapper[4949]: I1001 17:16:03.648602 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-backup-0_bdb84d6a-de04-4107-917f-c2a6599ed2dc/cinder-backup/0.log" Oct 01 17:16:03 crc kubenswrapper[4949]: I1001 17:16:03.747280 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_configure-os-edpm-deployment-openstack-edpm-ipam-nghnz_5638925a-de61-4d2c-8863-88658f9bb7fd/configure-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:03 crc kubenswrapper[4949]: I1001 17:16:03.945414 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-xzkvd_251bdf57-7cc2-4c4d-b6d7-d5579e3e9341/init/0.log" Oct 01 17:16:04 crc kubenswrapper[4949]: I1001 17:16:04.150806 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-xzkvd_251bdf57-7cc2-4c4d-b6d7-d5579e3e9341/init/0.log" Oct 01 17:16:04 crc kubenswrapper[4949]: I1001 17:16:04.311277 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_dnsmasq-dns-76b5fdb995-xzkvd_251bdf57-7cc2-4c4d-b6d7-d5579e3e9341/dnsmasq-dns/0.log" Oct 01 17:16:04 crc kubenswrapper[4949]: I1001 17:16:04.530837 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b2d87d62-452d-44c4-8cfd-5cfaa8bc1157/glance-httpd/0.log" Oct 01 17:16:04 crc kubenswrapper[4949]: I1001 17:16:04.601230 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:16:04 crc kubenswrapper[4949]: E1001 17:16:04.601611 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:16:04 crc kubenswrapper[4949]: I1001 17:16:04.604862 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-external-api-0_b2d87d62-452d-44c4-8cfd-5cfaa8bc1157/glance-log/0.log" Oct 01 17:16:04 crc kubenswrapper[4949]: I1001 17:16:04.805638 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_29e77b45-e692-415f-8966-6e74d30b4d7b/glance-httpd/0.log" Oct 01 17:16:04 crc kubenswrapper[4949]: I1001 17:16:04.978053 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_glance-default-internal-api-0_29e77b45-e692-415f-8966-6e74d30b4d7b/glance-log/0.log" Oct 01 17:16:05 crc kubenswrapper[4949]: I1001 17:16:05.352228 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-797b4b5c88-m9tdj_3ff4ae7d-bc42-404d-ab53-e189c6d9a00a/horizon/0.log" Oct 01 17:16:05 crc kubenswrapper[4949]: I1001 17:16:05.555924 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_horizon-797b4b5c88-m9tdj_3ff4ae7d-bc42-404d-ab53-e189c6d9a00a/horizon-log/0.log" Oct 01 17:16:05 crc kubenswrapper[4949]: I1001 17:16:05.780651 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-certs-edpm-deployment-openstack-edpm-ipam-bp7jr_eff7bc51-9128-4d18-8e2e-05c8b779e7ba/install-certs-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:06 crc kubenswrapper[4949]: I1001 17:16:06.105632 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_install-os-edpm-deployment-openstack-edpm-ipam-279tn_bd38b0c3-ae51-4ad3-ae6c-2c926614301c/install-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:06 crc kubenswrapper[4949]: I1001 17:16:06.642602 4949 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29322241-shjwz_5972192f-ecd3-4cfa-8f79-c8a3874f9c65/keystone-cron/0.log" Oct 01 17:16:07 crc kubenswrapper[4949]: I1001 17:16:07.046689 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-cron-29322301-vwh2x_83bc5859-312b-4097-8bfc-0b53ea00e5a6/keystone-cron/0.log" Oct 01 17:16:07 crc kubenswrapper[4949]: I1001 17:16:07.281730 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_keystone-6544c97df6-6skzg_764dea52-7d14-4da5-a50d-2fa41001e2b4/keystone-api/0.log" Oct 01 17:16:07 crc kubenswrapper[4949]: I1001 17:16:07.437539 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_kube-state-metrics-0_d3cf67ff-4577-4cea-9cae-bf40fce7d527/kube-state-metrics/0.log" Oct 01 17:16:07 crc kubenswrapper[4949]: I1001 17:16:07.774921 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_libvirt-edpm-deployment-openstack-edpm-ipam-qt7mp_b26e9d4b-3eb7-4325-b59b-dde6b2b7d2f4/libvirt-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:07 crc kubenswrapper[4949]: I1001 17:16:07.949720 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_510ed0b0-9c2c-4f54-8323-755cd65b4393/manila-api-log/0.log" Oct 01 17:16:08 crc kubenswrapper[4949]: I1001 17:16:08.190251 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-api-0_510ed0b0-9c2c-4f54-8323-755cd65b4393/manila-api/0.log" Oct 01 17:16:08 crc kubenswrapper[4949]: I1001 17:16:08.396556 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_32632bf5-03f4-494c-8e79-3cb86d093629/probe/0.log" Oct 01 17:16:08 crc kubenswrapper[4949]: I1001 17:16:08.478325 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-scheduler-0_32632bf5-03f4-494c-8e79-3cb86d093629/manila-scheduler/0.log" Oct 01 17:16:08 crc kubenswrapper[4949]: I1001 17:16:08.881828 4949 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_a6094af2-c113-4b8f-9de7-a8bc511523d6/probe/0.log" Oct 01 17:16:08 crc kubenswrapper[4949]: I1001 17:16:08.958460 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_manila-share-share1-0_a6094af2-c113-4b8f-9de7-a8bc511523d6/manila-share/0.log" Oct 01 17:16:09 crc kubenswrapper[4949]: I1001 17:16:09.691215 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_cinder-volume-volume1-0_a9aee921-7753-4725-a42d-ee8161afd631/cinder-volume/0.log" Oct 01 17:16:10 crc kubenswrapper[4949]: I1001 17:16:10.109314 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-559fd97bd5-6zst2_8e733c32-7a78-4088-b089-cfe1a37bb3e4/neutron-api/0.log" Oct 01 17:16:10 crc kubenswrapper[4949]: I1001 17:16:10.711612 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-metadata-edpm-deployment-openstack-edpm-ipam-wmwxz_2b9b0f35-85c4-4286-9283-e3af60933d81/neutron-metadata-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:10 crc kubenswrapper[4949]: I1001 17:16:10.715582 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_neutron-559fd97bd5-6zst2_8e733c32-7a78-4088-b089-cfe1a37bb3e4/neutron-httpd/0.log" Oct 01 17:16:12 crc kubenswrapper[4949]: I1001 17:16:12.537707 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9fb393d4-7b75-432e-aaec-767addd7eb30/nova-api-log/0.log" Oct 01 17:16:12 crc kubenswrapper[4949]: I1001 17:16:12.538834 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-api-0_9fb393d4-7b75-432e-aaec-767addd7eb30/nova-api-api/0.log" Oct 01 17:16:12 crc kubenswrapper[4949]: I1001 17:16:12.914593 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell0-conductor-0_f6a7962e-d000-4077-aeb6-fbc55876a90d/nova-cell0-conductor-conductor/0.log" Oct 01 17:16:13 crc kubenswrapper[4949]: I1001 
17:16:13.295772 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-conductor-0_dffab7bc-2015-4dbe-9c81-b3e61eeface6/nova-cell1-conductor-conductor/0.log" Oct 01 17:16:13 crc kubenswrapper[4949]: I1001 17:16:13.640982 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-cell1-novncproxy-0_ae61c1e7-7ee2-4610-9cc5-e7df710424c7/nova-cell1-novncproxy-novncproxy/0.log" Oct 01 17:16:13 crc kubenswrapper[4949]: I1001 17:16:13.967628 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-custom-ceph-edpm-deployment-openstack-edpm-ipam-zrbw5_3c32efc4-95e1-4528-981f-0055372e12db/nova-custom-ceph-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:14 crc kubenswrapper[4949]: I1001 17:16:14.151622 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_562eed2a-1a27-4c6d-8c8c-675924006456/nova-metadata-log/0.log" Oct 01 17:16:14 crc kubenswrapper[4949]: I1001 17:16:14.858287 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-scheduler-0_53891933-1769-4a81-b239-f5b4a02cbe81/nova-scheduler-scheduler/0.log" Oct 01 17:16:15 crc kubenswrapper[4949]: I1001 17:16:15.484213 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fc01020a-ebfd-4c4b-b211-d7da1f9aa357/mysql-bootstrap/0.log" Oct 01 17:16:15 crc kubenswrapper[4949]: I1001 17:16:15.772141 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fc01020a-ebfd-4c4b-b211-d7da1f9aa357/mysql-bootstrap/0.log" Oct 01 17:16:16 crc kubenswrapper[4949]: I1001 17:16:16.087473 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-cell1-galera-0_fc01020a-ebfd-4c4b-b211-d7da1f9aa357/galera/0.log" Oct 01 17:16:16 crc kubenswrapper[4949]: I1001 17:16:16.583560 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_openstack-galera-0_d5cdf223-529e-4d39-bfc1-7483fbd94a69/mysql-bootstrap/0.log" Oct 01 17:16:16 crc kubenswrapper[4949]: I1001 17:16:16.826798 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d5cdf223-529e-4d39-bfc1-7483fbd94a69/mysql-bootstrap/0.log" Oct 01 17:16:17 crc kubenswrapper[4949]: I1001 17:16:17.008721 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_nova-metadata-0_562eed2a-1a27-4c6d-8c8c-675924006456/nova-metadata-metadata/0.log" Oct 01 17:16:17 crc kubenswrapper[4949]: I1001 17:16:17.058692 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstack-galera-0_d5cdf223-529e-4d39-bfc1-7483fbd94a69/galera/0.log" Oct 01 17:16:17 crc kubenswrapper[4949]: I1001 17:16:17.287092 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_openstackclient_b0b87944-8572-4efb-b446-46b0aa47a9ed/openstackclient/0.log" Oct 01 17:16:17 crc kubenswrapper[4949]: I1001 17:16:17.591614 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-4kkzs_230fbfcd-f990-42cf-88bb-9e4c4ae45a7d/ovn-controller/0.log" Oct 01 17:16:17 crc kubenswrapper[4949]: I1001 17:16:17.601321 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:16:17 crc kubenswrapper[4949]: E1001 17:16:17.601700 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:16:18 crc kubenswrapper[4949]: I1001 17:16:18.058930 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack_ovn-controller-metrics-dcnsm_6b840653-c566-4109-afbe-c5733092d91d/openstack-network-exporter/0.log" Oct 01 17:16:18 crc kubenswrapper[4949]: I1001 17:16:18.372664 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-248dp_9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c/ovsdb-server-init/0.log" Oct 01 17:16:18 crc kubenswrapper[4949]: I1001 17:16:18.609913 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-248dp_9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c/ovs-vswitchd/0.log" Oct 01 17:16:18 crc kubenswrapper[4949]: I1001 17:16:18.638166 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-248dp_9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c/ovsdb-server-init/0.log" Oct 01 17:16:18 crc kubenswrapper[4949]: I1001 17:16:18.933522 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-controller-ovs-248dp_9db16e40-d8f9-4ee2-bcfc-b093ecdacc7c/ovsdb-server/0.log" Oct 01 17:16:19 crc kubenswrapper[4949]: I1001 17:16:19.214386 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-edpm-deployment-openstack-edpm-ipam-zl9vq_ea1b02ac-6468-4c2f-989f-2f0ff9c10d1e/ovn-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:19 crc kubenswrapper[4949]: I1001 17:16:19.468641 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_271ab239-a9b6-47bf-a4a1-424db4c922a5/ovn-northd/0.log" Oct 01 17:16:19 crc kubenswrapper[4949]: I1001 17:16:19.502934 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovn-northd-0_271ab239-a9b6-47bf-a4a1-424db4c922a5/openstack-network-exporter/0.log" Oct 01 17:16:19 crc kubenswrapper[4949]: I1001 17:16:19.709118 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_321b87e2-0290-4062-9ae3-a7370005b2e4/openstack-network-exporter/0.log" Oct 01 17:16:19 crc kubenswrapper[4949]: I1001 17:16:19.886759 
4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-nb-0_321b87e2-0290-4062-9ae3-a7370005b2e4/ovsdbserver-nb/0.log" Oct 01 17:16:20 crc kubenswrapper[4949]: I1001 17:16:20.184282 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_394391aa-ba00-4197-870b-33f881a1afda/openstack-network-exporter/0.log" Oct 01 17:16:20 crc kubenswrapper[4949]: I1001 17:16:20.189663 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ovsdbserver-sb-0_394391aa-ba00-4197-870b-33f881a1afda/ovsdbserver-sb/0.log" Oct 01 17:16:20 crc kubenswrapper[4949]: I1001 17:16:20.531806 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-575569c7bd-g6srl_d6bde34f-c88d-4a70-ab54-084c4727d46d/placement-api/0.log" Oct 01 17:16:20 crc kubenswrapper[4949]: I1001 17:16:20.803761 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_placement-575569c7bd-g6srl_d6bde34f-c88d-4a70-ab54-084c4727d46d/placement-log/0.log" Oct 01 17:16:21 crc kubenswrapper[4949]: I1001 17:16:21.043636 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_620a0468-6462-442e-bfcf-ca26669a638a/setup-container/0.log" Oct 01 17:16:21 crc kubenswrapper[4949]: I1001 17:16:21.423976 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_620a0468-6462-442e-bfcf-ca26669a638a/setup-container/0.log" Oct 01 17:16:21 crc kubenswrapper[4949]: I1001 17:16:21.468616 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-cell1-server-0_620a0468-6462-442e-bfcf-ca26669a638a/rabbitmq/0.log" Oct 01 17:16:21 crc kubenswrapper[4949]: I1001 17:16:21.701896 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3e904978-9466-4e56-8e31-c4e06b6f49e2/setup-container/0.log" Oct 01 17:16:21 crc kubenswrapper[4949]: I1001 17:16:21.961312 4949 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openstack_rabbitmq-server-0_3e904978-9466-4e56-8e31-c4e06b6f49e2/setup-container/0.log" Oct 01 17:16:22 crc kubenswrapper[4949]: I1001 17:16:22.086836 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_rabbitmq-server-0_3e904978-9466-4e56-8e31-c4e06b6f49e2/rabbitmq/0.log" Oct 01 17:16:22 crc kubenswrapper[4949]: I1001 17:16:22.319349 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_reboot-os-edpm-deployment-openstack-edpm-ipam-km6ld_a6c07eb6-177e-415a-b24d-0ad1e81e3ab9/reboot-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:22 crc kubenswrapper[4949]: I1001 17:16:22.587717 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_repo-setup-edpm-deployment-openstack-edpm-ipam-sxsg2_7071f30b-0ed1-46d1-a2a7-c37d584f1ef4/repo-setup-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:22 crc kubenswrapper[4949]: I1001 17:16:22.795005 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_run-os-edpm-deployment-openstack-edpm-ipam-h6htf_cd7ebc74-d7fc-479d-9707-077690577317/run-os-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:23 crc kubenswrapper[4949]: I1001 17:16:23.015801 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_ssh-known-hosts-edpm-deployment-7g2xq_d73263fe-3cef-4a57-92b8-2d70f128c8d4/ssh-known-hosts-edpm-deployment/0.log" Oct 01 17:16:23 crc kubenswrapper[4949]: I1001 17:16:23.494545 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_tempest-tests-tempest_375603dd-5dd3-4d2f-ac58-5335ebc721c0/tempest-tests-tempest-tests-runner/0.log" Oct 01 17:16:23 crc kubenswrapper[4949]: I1001 17:16:23.756444 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_test-operator-logs-pod-tempest-tempest-tests-tempest_30fe503b-a2f7-4eb5-8bbc-c0872754b32d/test-operator-logs-container/0.log" Oct 01 17:16:24 crc kubenswrapper[4949]: I1001 17:16:24.148573 4949 log.go:25] "Finished 
parsing log file" path="/var/log/pods/openstack_validate-network-edpm-deployment-openstack-edpm-ipam-x2gzz_410ce0aa-73a6-4d01-998e-bcd51879be8e/validate-network-edpm-deployment-openstack-edpm-ipam/0.log" Oct 01 17:16:29 crc kubenswrapper[4949]: I1001 17:16:29.605226 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:16:29 crc kubenswrapper[4949]: E1001 17:16:29.605973 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:16:35 crc kubenswrapper[4949]: I1001 17:16:35.318154 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack_memcached-0_ecccaacc-6b05-4bbb-bba1-523c5b3de332/memcached/0.log" Oct 01 17:16:44 crc kubenswrapper[4949]: I1001 17:16:44.602439 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:16:44 crc kubenswrapper[4949]: E1001 17:16:44.603173 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:16:55 crc kubenswrapper[4949]: I1001 17:16:55.601681 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:16:55 crc kubenswrapper[4949]: E1001 17:16:55.602385 4949 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:17:10 crc kubenswrapper[4949]: I1001 17:17:10.602539 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:17:10 crc kubenswrapper[4949]: E1001 17:17:10.603546 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:17:24 crc kubenswrapper[4949]: I1001 17:17:24.602167 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:17:24 crc kubenswrapper[4949]: E1001 17:17:24.602879 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:17:38 crc kubenswrapper[4949]: I1001 17:17:38.601801 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:17:38 crc kubenswrapper[4949]: E1001 
17:17:38.602470 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:17:49 crc kubenswrapper[4949]: I1001 17:17:49.601372 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:17:49 crc kubenswrapper[4949]: E1001 17:17:49.602160 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:18:00 crc kubenswrapper[4949]: I1001 17:18:00.183623 4949 generic.go:334] "Generic (PLEG): container finished" podID="5b95a0aa-aec2-4a1d-ae11-14485377a8e5" containerID="925cfeb7095a68ef36491b415541990c31ae6336f6a03b03138dcbfb6b8fbfd2" exitCode=0 Oct 01 17:18:00 crc kubenswrapper[4949]: I1001 17:18:00.183704 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" event={"ID":"5b95a0aa-aec2-4a1d-ae11-14485377a8e5","Type":"ContainerDied","Data":"925cfeb7095a68ef36491b415541990c31ae6336f6a03b03138dcbfb6b8fbfd2"} Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.310623 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.340578 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4svq/crc-debug-g4g2s"] Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.348476 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4svq/crc-debug-g4g2s"] Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.427009 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-host\") pod \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\" (UID: \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\") " Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.427158 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-host" (OuterVolumeSpecName: "host") pod "5b95a0aa-aec2-4a1d-ae11-14485377a8e5" (UID: "5b95a0aa-aec2-4a1d-ae11-14485377a8e5"). InnerVolumeSpecName "host". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.427521 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jrjmr\" (UniqueName: \"kubernetes.io/projected/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-kube-api-access-jrjmr\") pod \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\" (UID: \"5b95a0aa-aec2-4a1d-ae11-14485377a8e5\") " Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.428017 4949 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-host\") on node \"crc\" DevicePath \"\"" Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.437981 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-kube-api-access-jrjmr" (OuterVolumeSpecName: "kube-api-access-jrjmr") pod "5b95a0aa-aec2-4a1d-ae11-14485377a8e5" (UID: "5b95a0aa-aec2-4a1d-ae11-14485377a8e5"). InnerVolumeSpecName "kube-api-access-jrjmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.530108 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jrjmr\" (UniqueName: \"kubernetes.io/projected/5b95a0aa-aec2-4a1d-ae11-14485377a8e5-kube-api-access-jrjmr\") on node \"crc\" DevicePath \"\"" Oct 01 17:18:01 crc kubenswrapper[4949]: I1001 17:18:01.614511 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b95a0aa-aec2-4a1d-ae11-14485377a8e5" path="/var/lib/kubelet/pods/5b95a0aa-aec2-4a1d-ae11-14485377a8e5/volumes" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.206949 4949 scope.go:117] "RemoveContainer" containerID="925cfeb7095a68ef36491b415541990c31ae6336f6a03b03138dcbfb6b8fbfd2" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.206968 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-g4g2s" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.495279 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4svq/crc-debug-dbqj2"] Oct 01 17:18:02 crc kubenswrapper[4949]: E1001 17:18:02.496401 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b95a0aa-aec2-4a1d-ae11-14485377a8e5" containerName="container-00" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.496481 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b95a0aa-aec2-4a1d-ae11-14485377a8e5" containerName="container-00" Oct 01 17:18:02 crc kubenswrapper[4949]: E1001 17:18:02.496546 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940" containerName="collect-profiles" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.496607 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940" containerName="collect-profiles" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.496879 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b95a0aa-aec2-4a1d-ae11-14485377a8e5" containerName="container-00" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.496952 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e510e9d0-4a8a-4fe8-a83d-e0d4ebc70940" containerName="collect-profiles" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.497589 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.649999 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-host\") pod \"crc-debug-dbqj2\" (UID: \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\") " pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.650389 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnxmd\" (UniqueName: \"kubernetes.io/projected/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-kube-api-access-pnxmd\") pod \"crc-debug-dbqj2\" (UID: \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\") " pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.754185 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-host\") pod \"crc-debug-dbqj2\" (UID: \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\") " pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.754638 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnxmd\" (UniqueName: \"kubernetes.io/projected/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-kube-api-access-pnxmd\") pod \"crc-debug-dbqj2\" (UID: \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\") " pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.754275 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-host\") pod \"crc-debug-dbqj2\" (UID: \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\") " pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:02 crc 
kubenswrapper[4949]: I1001 17:18:02.777951 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnxmd\" (UniqueName: \"kubernetes.io/projected/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-kube-api-access-pnxmd\") pod \"crc-debug-dbqj2\" (UID: \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\") " pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:02 crc kubenswrapper[4949]: I1001 17:18:02.813701 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:03 crc kubenswrapper[4949]: I1001 17:18:03.215814 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/crc-debug-dbqj2" event={"ID":"e0bed7d3-fe4b-47e0-a030-644f5d0889f3","Type":"ContainerStarted","Data":"266640fa7de8cc07187b612c6bab75cbbcb37d8e3d629862df29381953315a13"} Oct 01 17:18:03 crc kubenswrapper[4949]: I1001 17:18:03.216153 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/crc-debug-dbqj2" event={"ID":"e0bed7d3-fe4b-47e0-a030-644f5d0889f3","Type":"ContainerStarted","Data":"3c73244af6e958d3b7ed3a947e0c20f082c6e4e3f72cf7ec48ebb172f93fc66e"} Oct 01 17:18:03 crc kubenswrapper[4949]: I1001 17:18:03.230207 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-c4svq/crc-debug-dbqj2" podStartSLOduration=1.230190688 podStartE2EDuration="1.230190688s" podCreationTimestamp="2025-10-01 17:18:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-10-01 17:18:03.227512264 +0000 UTC m=+5782.533118465" watchObservedRunningTime="2025-10-01 17:18:03.230190688 +0000 UTC m=+5782.535796879" Oct 01 17:18:04 crc kubenswrapper[4949]: I1001 17:18:04.238817 4949 generic.go:334] "Generic (PLEG): container finished" podID="e0bed7d3-fe4b-47e0-a030-644f5d0889f3" 
containerID="266640fa7de8cc07187b612c6bab75cbbcb37d8e3d629862df29381953315a13" exitCode=0 Oct 01 17:18:04 crc kubenswrapper[4949]: I1001 17:18:04.239094 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/crc-debug-dbqj2" event={"ID":"e0bed7d3-fe4b-47e0-a030-644f5d0889f3","Type":"ContainerDied","Data":"266640fa7de8cc07187b612c6bab75cbbcb37d8e3d629862df29381953315a13"} Oct 01 17:18:04 crc kubenswrapper[4949]: I1001 17:18:04.602027 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:18:04 crc kubenswrapper[4949]: E1001 17:18:04.602753 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:18:05 crc kubenswrapper[4949]: I1001 17:18:05.364161 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:05 crc kubenswrapper[4949]: I1001 17:18:05.508180 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pnxmd\" (UniqueName: \"kubernetes.io/projected/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-kube-api-access-pnxmd\") pod \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\" (UID: \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\") " Oct 01 17:18:05 crc kubenswrapper[4949]: I1001 17:18:05.508306 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-host\") pod \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\" (UID: \"e0bed7d3-fe4b-47e0-a030-644f5d0889f3\") " Oct 01 17:18:05 crc kubenswrapper[4949]: I1001 17:18:05.508816 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-host" (OuterVolumeSpecName: "host") pod "e0bed7d3-fe4b-47e0-a030-644f5d0889f3" (UID: "e0bed7d3-fe4b-47e0-a030-644f5d0889f3"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 17:18:05 crc kubenswrapper[4949]: I1001 17:18:05.519407 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-kube-api-access-pnxmd" (OuterVolumeSpecName: "kube-api-access-pnxmd") pod "e0bed7d3-fe4b-47e0-a030-644f5d0889f3" (UID: "e0bed7d3-fe4b-47e0-a030-644f5d0889f3"). InnerVolumeSpecName "kube-api-access-pnxmd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:18:05 crc kubenswrapper[4949]: I1001 17:18:05.610520 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pnxmd\" (UniqueName: \"kubernetes.io/projected/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-kube-api-access-pnxmd\") on node \"crc\" DevicePath \"\"" Oct 01 17:18:05 crc kubenswrapper[4949]: I1001 17:18:05.610571 4949 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/e0bed7d3-fe4b-47e0-a030-644f5d0889f3-host\") on node \"crc\" DevicePath \"\"" Oct 01 17:18:06 crc kubenswrapper[4949]: I1001 17:18:06.256623 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/crc-debug-dbqj2" event={"ID":"e0bed7d3-fe4b-47e0-a030-644f5d0889f3","Type":"ContainerDied","Data":"3c73244af6e958d3b7ed3a947e0c20f082c6e4e3f72cf7ec48ebb172f93fc66e"} Oct 01 17:18:06 crc kubenswrapper[4949]: I1001 17:18:06.256954 4949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3c73244af6e958d3b7ed3a947e0c20f082c6e4e3f72cf7ec48ebb172f93fc66e" Oct 01 17:18:06 crc kubenswrapper[4949]: I1001 17:18:06.257016 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-dbqj2" Oct 01 17:18:13 crc kubenswrapper[4949]: I1001 17:18:13.447334 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4svq/crc-debug-dbqj2"] Oct 01 17:18:13 crc kubenswrapper[4949]: I1001 17:18:13.454573 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4svq/crc-debug-dbqj2"] Oct 01 17:18:13 crc kubenswrapper[4949]: I1001 17:18:13.614283 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0bed7d3-fe4b-47e0-a030-644f5d0889f3" path="/var/lib/kubelet/pods/e0bed7d3-fe4b-47e0-a030-644f5d0889f3/volumes" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.660762 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-c4svq/crc-debug-n998v"] Oct 01 17:18:14 crc kubenswrapper[4949]: E1001 17:18:14.661248 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0bed7d3-fe4b-47e0-a030-644f5d0889f3" containerName="container-00" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.661263 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0bed7d3-fe4b-47e0-a030-644f5d0889f3" containerName="container-00" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.661506 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0bed7d3-fe4b-47e0-a030-644f5d0889f3" containerName="container-00" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.662285 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.752219 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d46af4a-ed56-4218-9d2d-86ade30a063f-host\") pod \"crc-debug-n998v\" (UID: \"9d46af4a-ed56-4218-9d2d-86ade30a063f\") " pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.752338 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-488jf\" (UniqueName: \"kubernetes.io/projected/9d46af4a-ed56-4218-9d2d-86ade30a063f-kube-api-access-488jf\") pod \"crc-debug-n998v\" (UID: \"9d46af4a-ed56-4218-9d2d-86ade30a063f\") " pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.854493 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d46af4a-ed56-4218-9d2d-86ade30a063f-host\") pod \"crc-debug-n998v\" (UID: \"9d46af4a-ed56-4218-9d2d-86ade30a063f\") " pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.854647 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d46af4a-ed56-4218-9d2d-86ade30a063f-host\") pod \"crc-debug-n998v\" (UID: \"9d46af4a-ed56-4218-9d2d-86ade30a063f\") " pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.854685 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-488jf\" (UniqueName: \"kubernetes.io/projected/9d46af4a-ed56-4218-9d2d-86ade30a063f-kube-api-access-488jf\") pod \"crc-debug-n998v\" (UID: \"9d46af4a-ed56-4218-9d2d-86ade30a063f\") " pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:14 crc 
kubenswrapper[4949]: I1001 17:18:14.879566 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-488jf\" (UniqueName: \"kubernetes.io/projected/9d46af4a-ed56-4218-9d2d-86ade30a063f-kube-api-access-488jf\") pod \"crc-debug-n998v\" (UID: \"9d46af4a-ed56-4218-9d2d-86ade30a063f\") " pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:14 crc kubenswrapper[4949]: I1001 17:18:14.985262 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:15 crc kubenswrapper[4949]: I1001 17:18:15.328608 4949 generic.go:334] "Generic (PLEG): container finished" podID="9d46af4a-ed56-4218-9d2d-86ade30a063f" containerID="fb8be1bd6181dff0ac245308118beb483ee0a520e344f70a3da334b182af51d3" exitCode=0 Oct 01 17:18:15 crc kubenswrapper[4949]: I1001 17:18:15.328666 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/crc-debug-n998v" event={"ID":"9d46af4a-ed56-4218-9d2d-86ade30a063f","Type":"ContainerDied","Data":"fb8be1bd6181dff0ac245308118beb483ee0a520e344f70a3da334b182af51d3"} Oct 01 17:18:15 crc kubenswrapper[4949]: I1001 17:18:15.328945 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/crc-debug-n998v" event={"ID":"9d46af4a-ed56-4218-9d2d-86ade30a063f","Type":"ContainerStarted","Data":"e8fefa99afe7451a0d2189ab25b68930e8e4d559aef733f46d8083734192c4e5"} Oct 01 17:18:15 crc kubenswrapper[4949]: I1001 17:18:15.375361 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4svq/crc-debug-n998v"] Oct 01 17:18:15 crc kubenswrapper[4949]: I1001 17:18:15.383441 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4svq/crc-debug-n998v"] Oct 01 17:18:16 crc kubenswrapper[4949]: I1001 17:18:16.445603 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:16 crc kubenswrapper[4949]: I1001 17:18:16.594630 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d46af4a-ed56-4218-9d2d-86ade30a063f-host\") pod \"9d46af4a-ed56-4218-9d2d-86ade30a063f\" (UID: \"9d46af4a-ed56-4218-9d2d-86ade30a063f\") " Oct 01 17:18:16 crc kubenswrapper[4949]: I1001 17:18:16.594701 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-488jf\" (UniqueName: \"kubernetes.io/projected/9d46af4a-ed56-4218-9d2d-86ade30a063f-kube-api-access-488jf\") pod \"9d46af4a-ed56-4218-9d2d-86ade30a063f\" (UID: \"9d46af4a-ed56-4218-9d2d-86ade30a063f\") " Oct 01 17:18:16 crc kubenswrapper[4949]: I1001 17:18:16.594734 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d46af4a-ed56-4218-9d2d-86ade30a063f-host" (OuterVolumeSpecName: "host") pod "9d46af4a-ed56-4218-9d2d-86ade30a063f" (UID: "9d46af4a-ed56-4218-9d2d-86ade30a063f"). InnerVolumeSpecName "host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 01 17:18:16 crc kubenswrapper[4949]: I1001 17:18:16.595418 4949 reconciler_common.go:293] "Volume detached for volume \"host\" (UniqueName: \"kubernetes.io/host-path/9d46af4a-ed56-4218-9d2d-86ade30a063f-host\") on node \"crc\" DevicePath \"\"" Oct 01 17:18:16 crc kubenswrapper[4949]: I1001 17:18:16.609374 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d46af4a-ed56-4218-9d2d-86ade30a063f-kube-api-access-488jf" (OuterVolumeSpecName: "kube-api-access-488jf") pod "9d46af4a-ed56-4218-9d2d-86ade30a063f" (UID: "9d46af4a-ed56-4218-9d2d-86ade30a063f"). InnerVolumeSpecName "kube-api-access-488jf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:18:16 crc kubenswrapper[4949]: I1001 17:18:16.696647 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-488jf\" (UniqueName: \"kubernetes.io/projected/9d46af4a-ed56-4218-9d2d-86ade30a063f-kube-api-access-488jf\") on node \"crc\" DevicePath \"\"" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.035760 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv_9f616ad0-6bc3-46a6-a7c9-4c77256f8660/util/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.170230 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv_9f616ad0-6bc3-46a6-a7c9-4c77256f8660/pull/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.190577 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv_9f616ad0-6bc3-46a6-a7c9-4c77256f8660/util/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.197960 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv_9f616ad0-6bc3-46a6-a7c9-4c77256f8660/pull/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.349510 4949 scope.go:117] "RemoveContainer" containerID="fb8be1bd6181dff0ac245308118beb483ee0a520e344f70a3da334b182af51d3" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.349554 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-c4svq/crc-debug-n998v" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.376232 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv_9f616ad0-6bc3-46a6-a7c9-4c77256f8660/pull/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.429582 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv_9f616ad0-6bc3-46a6-a7c9-4c77256f8660/extract/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.444713 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_7cdf3127ac6a3925f4fb2d937c27e2845d0915122c2baebe66bf49073a6slsv_9f616ad0-6bc3-46a6-a7c9-4c77256f8660/util/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.588541 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-2dwb2_fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4/kube-rbac-proxy/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.601316 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:18:17 crc kubenswrapper[4949]: E1001 17:18:17.601710 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.627282 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_barbican-operator-controller-manager-6ff8b75857-2dwb2_fd0ea56f-b7dd-4d0d-b6bf-5ca133fa2ea4/manager/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.634415 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d46af4a-ed56-4218-9d2d-86ade30a063f" path="/var/lib/kubelet/pods/9d46af4a-ed56-4218-9d2d-86ade30a063f/volumes" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.646005 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-795d876f9c-8wvgx_e5ed691c-da8b-4bae-8d20-e92c1e062ea2/kube-rbac-proxy/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.795723 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_cinder-operator-controller-manager-795d876f9c-8wvgx_e5ed691c-da8b-4bae-8d20-e92c1e062ea2/manager/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.812737 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-rbgd2_6bec54a7-c02d-45e5-bc1a-22ca4e0d2229/kube-rbac-proxy/0.log" Oct 01 17:18:17 crc kubenswrapper[4949]: I1001 17:18:17.840336 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_designate-operator-controller-manager-84f4f7b77b-rbgd2_6bec54a7-c02d-45e5-bc1a-22ca4e0d2229/manager/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.030990 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-jshxk_e8658648-cf73-468c-8291-7b6ad0f265e6/kube-rbac-proxy/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.124388 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_glance-operator-controller-manager-84958c4d49-jshxk_e8658648-cf73-468c-8291-7b6ad0f265e6/manager/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.209110 4949 
log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-4k7z5_36350170-f4ac-4f4b-ba23-a05d9300f63a/manager/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.285028 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_heat-operator-controller-manager-5d889d78cf-4k7z5_36350170-f4ac-4f4b-ba23-a05d9300f63a/kube-rbac-proxy/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.359737 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-5w8c4_ad53e0de-7d24-447d-82c6-ab0a523c913a/kube-rbac-proxy/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.417464 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-9f4696d94-5w8c4_ad53e0de-7d24-447d-82c6-ab0a523c913a/manager/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.543886 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-9d6c5db85-cg7tn_611a7063-c924-4b35-b2d7-d4d48a7e8f7a/kube-rbac-proxy/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.706080 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_infra-operator-controller-manager-9d6c5db85-cg7tn_611a7063-c924-4b35-b2d7-d4d48a7e8f7a/manager/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.732892 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5cd4858477-5pcwg_58d1b185-0213-427d-8f85-2b2636e0d121/kube-rbac-proxy/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.752890 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ironic-operator-controller-manager-5cd4858477-5pcwg_58d1b185-0213-427d-8f85-2b2636e0d121/manager/0.log" Oct 01 17:18:18 crc 
kubenswrapper[4949]: I1001 17:18:18.914055 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-mrm9t_1f506f8f-047a-4efa-8ff2-dbe310d0e12e/kube-rbac-proxy/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.968510 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_keystone-operator-controller-manager-5bd55b4bff-mrm9t_1f506f8f-047a-4efa-8ff2-dbe310d0e12e/manager/0.log" Oct 01 17:18:18 crc kubenswrapper[4949]: I1001 17:18:18.994974 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-mtq8j_61fc51ec-92f9-4dc7-a0ea-793712809ebc/kube-rbac-proxy/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.163088 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_manila-operator-controller-manager-6d68dbc695-mtq8j_61fc51ec-92f9-4dc7-a0ea-793712809ebc/manager/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.182787 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-98jct_6f80de4b-ac33-4b39-b105-5927fd6511fc/kube-rbac-proxy/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.226144 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_mariadb-operator-controller-manager-88c7-98jct_6f80de4b-ac33-4b39-b105-5927fd6511fc/manager/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.325163 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-849d5b9b84-lhdzd_0853e80f-a3aa-4230-b028-f8a0887afb2f/kube-rbac-proxy/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.385557 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_neutron-operator-controller-manager-849d5b9b84-lhdzd_0853e80f-a3aa-4230-b028-f8a0887afb2f/manager/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.490867 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-64cd67b5cb-9tqls_acce3e2f-7372-4703-a0e2-3dec09dc5b2d/kube-rbac-proxy/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.613956 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_nova-operator-controller-manager-64cd67b5cb-9tqls_acce3e2f-7372-4703-a0e2-3dec09dc5b2d/manager/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.642317 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b787867f4-r5smf_39bff9fb-ac50-40cd-b23c-113763e3527e/kube-rbac-proxy/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.725703 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_octavia-operator-controller-manager-7b787867f4-r5smf_39bff9fb-ac50-40cd-b23c-113763e3527e/manager/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.781614 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77b9676b8crvdqk_60ac792e-9135-4ea0-84f1-1708c0421e70/kube-rbac-proxy/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.818802 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-baremetal-operator-controller-manager-77b9676b8crvdqk_60ac792e-9135-4ea0-84f1-1708c0421e70/manager/0.log" Oct 01 17:18:19 crc kubenswrapper[4949]: I1001 17:18:19.942632 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-745f9964cd-cwnfz_e56e40fb-b1f0-4955-9949-6e06db62d247/kube-rbac-proxy/0.log" Oct 01 17:18:20 crc kubenswrapper[4949]: 
I1001 17:18:20.156638 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-59598b58b7-k47pt_ed7412ec-ed40-43b5-b045-b14f81da4090/kube-rbac-proxy/0.log" Oct 01 17:18:20 crc kubenswrapper[4949]: I1001 17:18:20.343622 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-operator-59598b58b7-k47pt_ed7412ec-ed40-43b5-b045-b14f81da4090/operator/0.log" Oct 01 17:18:20 crc kubenswrapper[4949]: I1001 17:18:20.442727 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-index-2hj4v_00d1abc7-5d1b-40be-8310-4d62e84f0c06/registry-server/0.log" Oct 01 17:18:20 crc kubenswrapper[4949]: I1001 17:18:20.559724 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-mblm4_2d130278-c73a-4681-ae0f-76385dcf4de9/kube-rbac-proxy/0.log" Oct 01 17:18:20 crc kubenswrapper[4949]: I1001 17:18:20.684808 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-62v4q_97aa883c-f9b1-4f83-884e-13fdd88beca7/kube-rbac-proxy/0.log" Oct 01 17:18:20 crc kubenswrapper[4949]: I1001 17:18:20.706937 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_ovn-operator-controller-manager-9976ff44c-mblm4_2d130278-c73a-4681-ae0f-76385dcf4de9/manager/0.log" Oct 01 17:18:20 crc kubenswrapper[4949]: I1001 17:18:20.819525 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_placement-operator-controller-manager-589c58c6c-62v4q_97aa883c-f9b1-4f83-884e-13fdd88beca7/manager/0.log" Oct 01 17:18:20 crc kubenswrapper[4949]: I1001 17:18:20.959005 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_rabbitmq-cluster-operator-manager-5f97d8c699-cnlqh_3ad42e41-782a-480a-b8f5-e449eddb1649/operator/0.log" Oct 01 17:18:21 
crc kubenswrapper[4949]: I1001 17:18:21.093732 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-84d6b4b759-9qvwb_c3ab5ef5-3877-4512-a279-5d4504fd7301/kube-rbac-proxy/0.log" Oct 01 17:18:21 crc kubenswrapper[4949]: I1001 17:18:21.171318 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_swift-operator-controller-manager-84d6b4b759-9qvwb_c3ab5ef5-3877-4512-a279-5d4504fd7301/manager/0.log" Oct 01 17:18:21 crc kubenswrapper[4949]: I1001 17:18:21.228722 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-n8frs_02d48b56-707e-4347-88c5-0429a487042c/kube-rbac-proxy/0.log" Oct 01 17:18:21 crc kubenswrapper[4949]: I1001 17:18:21.320979 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_openstack-operator-controller-manager-745f9964cd-cwnfz_e56e40fb-b1f0-4955-9949-6e06db62d247/manager/0.log" Oct 01 17:18:21 crc kubenswrapper[4949]: I1001 17:18:21.417984 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_telemetry-operator-controller-manager-b8d54b5d7-n8frs_02d48b56-707e-4347-88c5-0429a487042c/manager/0.log" Oct 01 17:18:21 crc kubenswrapper[4949]: I1001 17:18:21.428649 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-85777745bb-l7zmk_420e19a1-ab84-4852-ab58-8242a09d5621/kube-rbac-proxy/0.log" Oct 01 17:18:21 crc kubenswrapper[4949]: I1001 17:18:21.448157 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_test-operator-controller-manager-85777745bb-l7zmk_420e19a1-ab84-4852-ab58-8242a09d5621/manager/0.log" Oct 01 17:18:21 crc kubenswrapper[4949]: I1001 17:18:21.573927 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b9957f54f-z2b6f_013782e2-98ad-4401-af7d-22ac977c0e42/kube-rbac-proxy/0.log" Oct 01 17:18:21 crc kubenswrapper[4949]: I1001 17:18:21.609601 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_watcher-operator-controller-manager-6b9957f54f-z2b6f_013782e2-98ad-4401-af7d-22ac977c0e42/manager/0.log" Oct 01 17:18:31 crc kubenswrapper[4949]: I1001 17:18:31.607097 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:18:31 crc kubenswrapper[4949]: E1001 17:18:31.607996 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:18:36 crc kubenswrapper[4949]: I1001 17:18:36.514585 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-zz76h_3d1c7446-d5a2-4dd3-996c-0ca11ccbcf0b/control-plane-machine-set-operator/0.log" Oct 01 17:18:36 crc kubenswrapper[4949]: I1001 17:18:36.667729 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t8vc2_ff9c89a1-2f76-47e5-9e37-866d1d8adef2/machine-api-operator/0.log" Oct 01 17:18:36 crc kubenswrapper[4949]: I1001 17:18:36.705853 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-t8vc2_ff9c89a1-2f76-47e5-9e37-866d1d8adef2/kube-rbac-proxy/0.log" Oct 01 17:18:46 crc kubenswrapper[4949]: I1001 17:18:46.603643 4949 scope.go:117] "RemoveContainer" 
containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:18:46 crc kubenswrapper[4949]: E1001 17:18:46.604316 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:18:48 crc kubenswrapper[4949]: I1001 17:18:48.077165 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-5b446d88c5-7vbbz_79dab76c-1e2a-4bd3-9a7d-8efe7bfd104b/cert-manager-controller/0.log" Oct 01 17:18:48 crc kubenswrapper[4949]: I1001 17:18:48.221210 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7f985d654d-lqrkp_ea0056e5-5d6c-4039-9891-175a1352ab9d/cert-manager-cainjector/0.log" Oct 01 17:18:48 crc kubenswrapper[4949]: I1001 17:18:48.261800 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-5655c58dd6-4cjht_0bc43872-8657-4c8b-be00-679944969a4d/cert-manager-webhook/0.log" Oct 01 17:18:59 crc kubenswrapper[4949]: I1001 17:18:59.631888 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-console-plugin-6b874cbd85-b9x5t_8a0b8afa-4a92-444c-b7a1-cf9939e5d88c/nmstate-console-plugin/0.log" Oct 01 17:18:59 crc kubenswrapper[4949]: I1001 17:18:59.794082 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-handler-g5nwf_3c565206-451b-4bfc-bef6-20b6f3a33546/nmstate-handler/0.log" Oct 01 17:18:59 crc kubenswrapper[4949]: I1001 17:18:59.848866 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-bqplx_9826cb28-c103-4f8b-88a6-6476badd1cf7/kube-rbac-proxy/0.log" Oct 01 17:18:59 crc kubenswrapper[4949]: I1001 17:18:59.881763 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-metrics-fdff9cb8d-bqplx_9826cb28-c103-4f8b-88a6-6476badd1cf7/nmstate-metrics/0.log" Oct 01 17:19:00 crc kubenswrapper[4949]: I1001 17:19:00.024685 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-operator-858ddd8f98-lw9lc_1d72f965-53c3-4cf1-915b-41cec48f788b/nmstate-operator/0.log" Oct 01 17:19:00 crc kubenswrapper[4949]: I1001 17:19:00.087073 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-nmstate_nmstate-webhook-6cdbc54649-hknwz_70acd657-66a2-4cc3-90e9-1a2a1448155c/nmstate-webhook/0.log" Oct 01 17:19:00 crc kubenswrapper[4949]: I1001 17:19:00.602615 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:19:00 crc kubenswrapper[4949]: E1001 17:19:00.602930 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.665422 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-546xw"] Oct 01 17:19:07 crc kubenswrapper[4949]: E1001 17:19:07.666361 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d46af4a-ed56-4218-9d2d-86ade30a063f" containerName="container-00" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.666378 4949 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="9d46af4a-ed56-4218-9d2d-86ade30a063f" containerName="container-00" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.666598 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d46af4a-ed56-4218-9d2d-86ade30a063f" containerName="container-00" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.667914 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.675082 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-546xw"] Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.816030 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-catalog-content\") pod \"redhat-marketplace-546xw\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.816154 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xskdn\" (UniqueName: \"kubernetes.io/projected/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-kube-api-access-xskdn\") pod \"redhat-marketplace-546xw\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.816522 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-utilities\") pod \"redhat-marketplace-546xw\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.918290 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-catalog-content\") pod \"redhat-marketplace-546xw\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.918575 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xskdn\" (UniqueName: \"kubernetes.io/projected/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-kube-api-access-xskdn\") pod \"redhat-marketplace-546xw\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.918629 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-utilities\") pod \"redhat-marketplace-546xw\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.919016 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-catalog-content\") pod \"redhat-marketplace-546xw\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.920374 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-utilities\") pod \"redhat-marketplace-546xw\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.956315 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-xskdn\" (UniqueName: \"kubernetes.io/projected/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-kube-api-access-xskdn\") pod \"redhat-marketplace-546xw\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:07 crc kubenswrapper[4949]: I1001 17:19:07.989592 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:08 crc kubenswrapper[4949]: I1001 17:19:08.445525 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-546xw"] Oct 01 17:19:08 crc kubenswrapper[4949]: I1001 17:19:08.894459 4949 generic.go:334] "Generic (PLEG): container finished" podID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerID="e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b" exitCode=0 Oct 01 17:19:08 crc kubenswrapper[4949]: I1001 17:19:08.894552 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-546xw" event={"ID":"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7","Type":"ContainerDied","Data":"e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b"} Oct 01 17:19:08 crc kubenswrapper[4949]: I1001 17:19:08.894795 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-546xw" event={"ID":"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7","Type":"ContainerStarted","Data":"94b6af0d0dc0b55fcce6516299193541dcc8a311816e183249de195843dc0da5"} Oct 01 17:19:08 crc kubenswrapper[4949]: I1001 17:19:08.896427 4949 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Oct 01 17:19:10 crc kubenswrapper[4949]: I1001 17:19:10.916069 4949 generic.go:334] "Generic (PLEG): container finished" podID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerID="bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3" exitCode=0 Oct 01 17:19:10 crc kubenswrapper[4949]: I1001 
17:19:10.916145 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-546xw" event={"ID":"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7","Type":"ContainerDied","Data":"bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3"} Oct 01 17:19:11 crc kubenswrapper[4949]: I1001 17:19:11.928426 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-546xw" event={"ID":"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7","Type":"ContainerStarted","Data":"eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225"} Oct 01 17:19:11 crc kubenswrapper[4949]: I1001 17:19:11.950336 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-546xw" podStartSLOduration=2.424185653 podStartE2EDuration="4.95029516s" podCreationTimestamp="2025-10-01 17:19:07 +0000 UTC" firstStartedPulling="2025-10-01 17:19:08.896201118 +0000 UTC m=+5848.201807309" lastFinishedPulling="2025-10-01 17:19:11.422310595 +0000 UTC m=+5850.727916816" observedRunningTime="2025-10-01 17:19:11.945408345 +0000 UTC m=+5851.251014536" watchObservedRunningTime="2025-10-01 17:19:11.95029516 +0000 UTC m=+5851.255901351" Oct 01 17:19:13 crc kubenswrapper[4949]: I1001 17:19:13.707781 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-pf22v_b6da831f-cb7b-40a9-bf29-b340db8658a0/kube-rbac-proxy/0.log" Oct 01 17:19:13 crc kubenswrapper[4949]: I1001 17:19:13.793690 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-68d546b9d8-pf22v_b6da831f-cb7b-40a9-bf29-b340db8658a0/controller/0.log" Oct 01 17:19:13 crc kubenswrapper[4949]: I1001 17:19:13.891993 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-frr-files/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.083391 4949 log.go:25] "Finished parsing log 
file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-metrics/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.084360 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-frr-files/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.108859 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-reloader/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.118292 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-reloader/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.238004 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-frr-files/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.309808 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-metrics/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.309812 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-reloader/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.343821 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-metrics/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.440665 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-frr-files/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.513267 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/controller/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.520550 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-reloader/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.524417 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/cp-metrics/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.702240 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/kube-rbac-proxy-frr/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.708975 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/frr-metrics/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.728427 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/kube-rbac-proxy/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.925265 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-64bf5d555-zhn42_6223d63f-7f8d-429c-8526-d0c4d21798cd/frr-k8s-webhook-server/0.log" Oct 01 17:19:14 crc kubenswrapper[4949]: I1001 17:19:14.956799 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/reloader/0.log" Oct 01 17:19:15 crc kubenswrapper[4949]: I1001 17:19:15.132898 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-cf5f54bf-mpgzz_127d231a-5ecc-4d28-b1eb-9ac562730952/manager/0.log" Oct 01 17:19:15 crc kubenswrapper[4949]: I1001 17:19:15.254168 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-gg5zc_655538b2-2c0d-48b6-a5e3-ada4d1150dbf/frr/0.log" Oct 01 17:19:15 crc kubenswrapper[4949]: I1001 17:19:15.333669 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-557dffd7fc-wmwrc_cd4c2c4f-770f-409c-88c3-f7c05f5be013/webhook-server/0.log" Oct 01 17:19:15 crc kubenswrapper[4949]: I1001 17:19:15.381061 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlbq6_9f15709f-4836-45cd-a3c1-1fba0a51817e/kube-rbac-proxy/0.log" Oct 01 17:19:15 crc kubenswrapper[4949]: I1001 17:19:15.602452 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:19:15 crc kubenswrapper[4949]: E1001 17:19:15.602755 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:19:15 crc kubenswrapper[4949]: I1001 17:19:15.822046 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-mlbq6_9f15709f-4836-45cd-a3c1-1fba0a51817e/speaker/0.log" Oct 01 17:19:17 crc kubenswrapper[4949]: I1001 17:19:17.992225 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:17 crc kubenswrapper[4949]: I1001 17:19:17.992891 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:18 crc kubenswrapper[4949]: I1001 17:19:18.060276 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:19 crc kubenswrapper[4949]: I1001 17:19:19.045810 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:19 crc kubenswrapper[4949]: I1001 17:19:19.096586 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-546xw"] Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.000092 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-546xw" podUID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerName="registry-server" containerID="cri-o://eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225" gracePeriod=2 Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.557447 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.678196 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xskdn\" (UniqueName: \"kubernetes.io/projected/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-kube-api-access-xskdn\") pod \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.678387 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-utilities\") pod \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.678459 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-catalog-content\") pod 
\"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\" (UID: \"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7\") " Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.679411 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-utilities" (OuterVolumeSpecName: "utilities") pod "c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" (UID: "c7fffbe7-02bc-4bdb-92cc-e908cd1282c7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.690436 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-kube-api-access-xskdn" (OuterVolumeSpecName: "kube-api-access-xskdn") pod "c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" (UID: "c7fffbe7-02bc-4bdb-92cc-e908cd1282c7"). InnerVolumeSpecName "kube-api-access-xskdn". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.691388 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" (UID: "c7fffbe7-02bc-4bdb-92cc-e908cd1282c7"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.781321 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xskdn\" (UniqueName: \"kubernetes.io/projected/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-kube-api-access-xskdn\") on node \"crc\" DevicePath \"\"" Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.781360 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 17:19:21 crc kubenswrapper[4949]: I1001 17:19:21.781370 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.012333 4949 generic.go:334] "Generic (PLEG): container finished" podID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerID="eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225" exitCode=0 Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.012417 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-546xw" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.012417 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-546xw" event={"ID":"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7","Type":"ContainerDied","Data":"eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225"} Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.012896 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-546xw" event={"ID":"c7fffbe7-02bc-4bdb-92cc-e908cd1282c7","Type":"ContainerDied","Data":"94b6af0d0dc0b55fcce6516299193541dcc8a311816e183249de195843dc0da5"} Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.012925 4949 scope.go:117] "RemoveContainer" containerID="eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.052279 4949 scope.go:117] "RemoveContainer" containerID="bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.055667 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-546xw"] Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.075236 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-546xw"] Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.098769 4949 scope.go:117] "RemoveContainer" containerID="e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.136531 4949 scope.go:117] "RemoveContainer" containerID="eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225" Oct 01 17:19:22 crc kubenswrapper[4949]: E1001 17:19:22.137255 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225\": container with ID starting with eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225 not found: ID does not exist" containerID="eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.137304 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225"} err="failed to get container status \"eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225\": rpc error: code = NotFound desc = could not find container \"eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225\": container with ID starting with eeaabfd2ded1c8146b31309e6e75f43bd6911947db728fe74036510d848cc225 not found: ID does not exist" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.137339 4949 scope.go:117] "RemoveContainer" containerID="bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3" Oct 01 17:19:22 crc kubenswrapper[4949]: E1001 17:19:22.137745 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3\": container with ID starting with bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3 not found: ID does not exist" containerID="bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.137784 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3"} err="failed to get container status \"bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3\": rpc error: code = NotFound desc = could not find container \"bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3\": container with ID 
starting with bc45d039cdbc12af2901ba997d91d584c6fea85c023dd2cb23cc5c4b758784c3 not found: ID does not exist" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.137810 4949 scope.go:117] "RemoveContainer" containerID="e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b" Oct 01 17:19:22 crc kubenswrapper[4949]: E1001 17:19:22.138099 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b\": container with ID starting with e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b not found: ID does not exist" containerID="e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b" Oct 01 17:19:22 crc kubenswrapper[4949]: I1001 17:19:22.138152 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b"} err="failed to get container status \"e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b\": rpc error: code = NotFound desc = could not find container \"e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b\": container with ID starting with e764a3451f31309c9ffc1e55d2b56a707a35ac1ffcfb9e82b1e448f4998f799b not found: ID does not exist" Oct 01 17:19:23 crc kubenswrapper[4949]: I1001 17:19:23.611911 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" path="/var/lib/kubelet/pods/c7fffbe7-02bc-4bdb-92cc-e908cd1282c7/volumes" Oct 01 17:19:27 crc kubenswrapper[4949]: I1001 17:19:27.419755 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt_5c4b466f-a6c5-447e-84ca-70e154cd29c7/util/0.log" Oct 01 17:19:27 crc kubenswrapper[4949]: I1001 17:19:27.569376 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt_5c4b466f-a6c5-447e-84ca-70e154cd29c7/util/0.log" Oct 01 17:19:27 crc kubenswrapper[4949]: I1001 17:19:27.608964 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt_5c4b466f-a6c5-447e-84ca-70e154cd29c7/pull/0.log" Oct 01 17:19:27 crc kubenswrapper[4949]: I1001 17:19:27.664086 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt_5c4b466f-a6c5-447e-84ca-70e154cd29c7/pull/0.log" Oct 01 17:19:27 crc kubenswrapper[4949]: I1001 17:19:27.834903 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt_5c4b466f-a6c5-447e-84ca-70e154cd29c7/extract/0.log" Oct 01 17:19:27 crc kubenswrapper[4949]: I1001 17:19:27.835578 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt_5c4b466f-a6c5-447e-84ca-70e154cd29c7/util/0.log" Oct 01 17:19:27 crc kubenswrapper[4949]: I1001 17:19:27.862090 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8f2f4ee801e5826a37d84a7b1fc4ccbf6b79de668302737d0f1152d8d2p4dwt_5c4b466f-a6c5-447e-84ca-70e154cd29c7/pull/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.032061 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q_9795b76c-919f-480c-8208-922a235c602a/util/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.206074 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q_9795b76c-919f-480c-8208-922a235c602a/util/0.log" Oct 01 
17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.220058 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q_9795b76c-919f-480c-8208-922a235c602a/pull/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.253661 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q_9795b76c-919f-480c-8208-922a235c602a/pull/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.367621 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q_9795b76c-919f-480c-8208-922a235c602a/extract/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.417683 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q_9795b76c-919f-480c-8208-922a235c602a/pull/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.423191 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_9a6e092ce660f08e14c0b0ceab3711fa43f2b70244f9df8a7a069040bc2nd7q_9795b76c-919f-480c-8208-922a235c602a/util/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.570732 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pznv4_f0d47724-591c-4862-9e3f-9c190aada131/extract-utilities/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.602238 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:19:28 crc kubenswrapper[4949]: E1001 17:19:28.602544 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.704106 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pznv4_f0d47724-591c-4862-9e3f-9c190aada131/extract-content/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.712035 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pznv4_f0d47724-591c-4862-9e3f-9c190aada131/extract-utilities/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.754537 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pznv4_f0d47724-591c-4862-9e3f-9c190aada131/extract-content/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.883428 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pznv4_f0d47724-591c-4862-9e3f-9c190aada131/extract-utilities/0.log" Oct 01 17:19:28 crc kubenswrapper[4949]: I1001 17:19:28.896151 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pznv4_f0d47724-591c-4862-9e3f-9c190aada131/extract-content/0.log" Oct 01 17:19:29 crc kubenswrapper[4949]: I1001 17:19:29.119808 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phqz6_15e92864-5078-4780-ac9c-a5064ba66ada/extract-utilities/0.log" Oct 01 17:19:29 crc kubenswrapper[4949]: I1001 17:19:29.319588 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phqz6_15e92864-5078-4780-ac9c-a5064ba66ada/extract-content/0.log" Oct 01 17:19:29 crc kubenswrapper[4949]: I1001 17:19:29.325099 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-phqz6_15e92864-5078-4780-ac9c-a5064ba66ada/extract-content/0.log" Oct 01 17:19:29 crc kubenswrapper[4949]: I1001 17:19:29.414248 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phqz6_15e92864-5078-4780-ac9c-a5064ba66ada/extract-utilities/0.log" Oct 01 17:19:29 crc kubenswrapper[4949]: I1001 17:19:29.450573 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-pznv4_f0d47724-591c-4862-9e3f-9c190aada131/registry-server/0.log" Oct 01 17:19:29 crc kubenswrapper[4949]: I1001 17:19:29.596299 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phqz6_15e92864-5078-4780-ac9c-a5064ba66ada/extract-utilities/0.log" Oct 01 17:19:29 crc kubenswrapper[4949]: I1001 17:19:29.647070 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phqz6_15e92864-5078-4780-ac9c-a5064ba66ada/extract-content/0.log" Oct 01 17:19:29 crc kubenswrapper[4949]: I1001 17:19:29.820930 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l_34c10916-fa5b-41e3-82f3-6263afd45c83/util/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.045537 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l_34c10916-fa5b-41e3-82f3-6263afd45c83/pull/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.087966 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l_34c10916-fa5b-41e3-82f3-6263afd45c83/util/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.100617 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l_34c10916-fa5b-41e3-82f3-6263afd45c83/pull/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.309585 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l_34c10916-fa5b-41e3-82f3-6263afd45c83/util/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.310720 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l_34c10916-fa5b-41e3-82f3-6263afd45c83/pull/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.340680 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_f29efc416ca216184f30dbb4b19e0f463bdcecc8ef634322abbad88d96rvf5l_34c10916-fa5b-41e3-82f3-6263afd45c83/extract/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.360968 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-phqz6_15e92864-5078-4780-ac9c-a5064ba66ada/registry-server/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.495554 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs_bcdec972-172f-4ddb-83ed-e421a89a9a15/util/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.669951 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs_bcdec972-172f-4ddb-83ed-e421a89a9a15/util/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.678830 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs_bcdec972-172f-4ddb-83ed-e421a89a9a15/pull/0.log" Oct 01 17:19:30 crc 
kubenswrapper[4949]: I1001 17:19:30.697964 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs_bcdec972-172f-4ddb-83ed-e421a89a9a15/pull/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.869644 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs_bcdec972-172f-4ddb-83ed-e421a89a9a15/pull/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.884830 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs_bcdec972-172f-4ddb-83ed-e421a89a9a15/util/0.log" Oct 01 17:19:30 crc kubenswrapper[4949]: I1001 17:19:30.885443 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_fa9831ede5d93c33d525b70ce6ddf94e500d80992af75a3305fe98835cbf2qs_bcdec972-172f-4ddb-83ed-e421a89a9a15/extract/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.032361 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-smlrq_46f6bb2f-7a68-4a9a-a9c9-0cb9a60d959b/marketplace-operator/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.085000 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4wgtk_00c41ac2-4e6b-4656-b240-03036d30778d/extract-utilities/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.245264 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4wgtk_00c41ac2-4e6b-4656-b240-03036d30778d/extract-content/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.261804 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4wgtk_00c41ac2-4e6b-4656-b240-03036d30778d/extract-utilities/0.log" Oct 01 
17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.261923 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4wgtk_00c41ac2-4e6b-4656-b240-03036d30778d/extract-content/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.473975 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4wgtk_00c41ac2-4e6b-4656-b240-03036d30778d/extract-content/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.482275 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4wgtk_00c41ac2-4e6b-4656-b240-03036d30778d/extract-utilities/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.484604 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlcx_55f19eb7-e134-48c0-96b1-4b7801cefa6b/extract-utilities/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.657296 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlcx_55f19eb7-e134-48c0-96b1-4b7801cefa6b/extract-utilities/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.685754 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlcx_55f19eb7-e134-48c0-96b1-4b7801cefa6b/extract-content/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.741328 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlcx_55f19eb7-e134-48c0-96b1-4b7801cefa6b/extract-content/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.773587 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-4wgtk_00c41ac2-4e6b-4656-b240-03036d30778d/registry-server/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.905890 4949 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-operators-btlcx_55f19eb7-e134-48c0-96b1-4b7801cefa6b/extract-content/0.log" Oct 01 17:19:31 crc kubenswrapper[4949]: I1001 17:19:31.910355 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlcx_55f19eb7-e134-48c0-96b1-4b7801cefa6b/extract-utilities/0.log" Oct 01 17:19:32 crc kubenswrapper[4949]: I1001 17:19:32.341111 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-btlcx_55f19eb7-e134-48c0-96b1-4b7801cefa6b/registry-server/0.log" Oct 01 17:19:41 crc kubenswrapper[4949]: I1001 17:19:41.609495 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:19:41 crc kubenswrapper[4949]: E1001 17:19:41.610140 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:19:52 crc kubenswrapper[4949]: I1001 17:19:52.601487 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:19:52 crc kubenswrapper[4949]: E1001 17:19:52.602298 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:20:05 crc kubenswrapper[4949]: I1001 17:20:05.601415 
4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:20:05 crc kubenswrapper[4949]: E1001 17:20:05.601983 4949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-l6287_openshift-machine-config-operator(0e15cd67-d4ad-49b8-96a6-da114105e558)\"" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" Oct 01 17:20:19 crc kubenswrapper[4949]: I1001 17:20:19.601686 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0" Oct 01 17:20:20 crc kubenswrapper[4949]: I1001 17:20:20.536051 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"41caeb7faa29cac2601895c29779f001ea635f6bdd8eac6167ddb436f370ac42"} Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.371681 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-q4g5q"] Oct 01 17:21:40 crc kubenswrapper[4949]: E1001 17:21:40.372563 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerName="extract-utilities" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.372580 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerName="extract-utilities" Oct 01 17:21:40 crc kubenswrapper[4949]: E1001 17:21:40.372608 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerName="registry-server" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.372614 4949 state_mem.go:107] "Deleted 
CPUSet assignment" podUID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerName="registry-server" Oct 01 17:21:40 crc kubenswrapper[4949]: E1001 17:21:40.372623 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerName="extract-content" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.372628 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerName="extract-content" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.372849 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="c7fffbe7-02bc-4bdb-92cc-e908cd1282c7" containerName="registry-server" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.374203 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.387285 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q4g5q"] Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.539617 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z5tsx\" (UniqueName: \"kubernetes.io/projected/e7187f3a-14f6-45c9-abce-5425b523af54-kube-api-access-z5tsx\") pod \"certified-operators-q4g5q\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.539680 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-catalog-content\") pod \"certified-operators-q4g5q\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.540455 4949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-utilities\") pod \"certified-operators-q4g5q\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.645393 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z5tsx\" (UniqueName: \"kubernetes.io/projected/e7187f3a-14f6-45c9-abce-5425b523af54-kube-api-access-z5tsx\") pod \"certified-operators-q4g5q\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.645442 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-catalog-content\") pod \"certified-operators-q4g5q\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.645578 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-utilities\") pod \"certified-operators-q4g5q\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.653356 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-utilities\") pod \"certified-operators-q4g5q\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.653418 4949 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-catalog-content\") pod \"certified-operators-q4g5q\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.669503 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z5tsx\" (UniqueName: \"kubernetes.io/projected/e7187f3a-14f6-45c9-abce-5425b523af54-kube-api-access-z5tsx\") pod \"certified-operators-q4g5q\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:40 crc kubenswrapper[4949]: I1001 17:21:40.694586 4949 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:41 crc kubenswrapper[4949]: I1001 17:21:41.231420 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-q4g5q"] Oct 01 17:21:41 crc kubenswrapper[4949]: I1001 17:21:41.337827 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4g5q" event={"ID":"e7187f3a-14f6-45c9-abce-5425b523af54","Type":"ContainerStarted","Data":"ccad977e46362c471cc162bcbb2805e5d192addc1a0da9b989503dc0864cc196"} Oct 01 17:21:42 crc kubenswrapper[4949]: I1001 17:21:42.351115 4949 generic.go:334] "Generic (PLEG): container finished" podID="e7187f3a-14f6-45c9-abce-5425b523af54" containerID="021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed" exitCode=0 Oct 01 17:21:42 crc kubenswrapper[4949]: I1001 17:21:42.351264 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4g5q" event={"ID":"e7187f3a-14f6-45c9-abce-5425b523af54","Type":"ContainerDied","Data":"021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed"} Oct 01 17:21:44 crc 
kubenswrapper[4949]: I1001 17:21:44.388764 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4g5q" event={"ID":"e7187f3a-14f6-45c9-abce-5425b523af54","Type":"ContainerStarted","Data":"43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459"} Oct 01 17:21:45 crc kubenswrapper[4949]: I1001 17:21:45.401530 4949 generic.go:334] "Generic (PLEG): container finished" podID="e7187f3a-14f6-45c9-abce-5425b523af54" containerID="43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459" exitCode=0 Oct 01 17:21:45 crc kubenswrapper[4949]: I1001 17:21:45.401812 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4g5q" event={"ID":"e7187f3a-14f6-45c9-abce-5425b523af54","Type":"ContainerDied","Data":"43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459"} Oct 01 17:21:46 crc kubenswrapper[4949]: I1001 17:21:46.419074 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4g5q" event={"ID":"e7187f3a-14f6-45c9-abce-5425b523af54","Type":"ContainerStarted","Data":"278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8"} Oct 01 17:21:46 crc kubenswrapper[4949]: I1001 17:21:46.463265 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-q4g5q" podStartSLOduration=2.86238442 podStartE2EDuration="6.458708691s" podCreationTimestamp="2025-10-01 17:21:40 +0000 UTC" firstStartedPulling="2025-10-01 17:21:42.353118283 +0000 UTC m=+6001.658724474" lastFinishedPulling="2025-10-01 17:21:45.949442544 +0000 UTC m=+6005.255048745" observedRunningTime="2025-10-01 17:21:46.441457404 +0000 UTC m=+6005.747063605" watchObservedRunningTime="2025-10-01 17:21:46.458708691 +0000 UTC m=+6005.764314892" Oct 01 17:21:50 crc kubenswrapper[4949]: I1001 17:21:50.695903 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:50 crc kubenswrapper[4949]: I1001 17:21:50.696862 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:50 crc kubenswrapper[4949]: I1001 17:21:50.765168 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:51 crc kubenswrapper[4949]: I1001 17:21:51.514772 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:51 crc kubenswrapper[4949]: I1001 17:21:51.591694 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q4g5q"] Oct 01 17:21:53 crc kubenswrapper[4949]: I1001 17:21:53.497451 4949 generic.go:334] "Generic (PLEG): container finished" podID="a49cf713-89e1-4955-bd33-3b894ebdd935" containerID="669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5" exitCode=0 Oct 01 17:21:53 crc kubenswrapper[4949]: I1001 17:21:53.498448 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-q4g5q" podUID="e7187f3a-14f6-45c9-abce-5425b523af54" containerName="registry-server" containerID="cri-o://278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8" gracePeriod=2 Oct 01 17:21:53 crc kubenswrapper[4949]: I1001 17:21:53.497554 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-c4svq/must-gather-dqdst" event={"ID":"a49cf713-89e1-4955-bd33-3b894ebdd935","Type":"ContainerDied","Data":"669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5"} Oct 01 17:21:53 crc kubenswrapper[4949]: I1001 17:21:53.499849 4949 scope.go:117] "RemoveContainer" containerID="669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5" Oct 01 17:21:53 crc kubenswrapper[4949]: I1001 17:21:53.957355 4949 util.go:48] 
"No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.144564 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5tsx\" (UniqueName: \"kubernetes.io/projected/e7187f3a-14f6-45c9-abce-5425b523af54-kube-api-access-z5tsx\") pod \"e7187f3a-14f6-45c9-abce-5425b523af54\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.144725 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-utilities\") pod \"e7187f3a-14f6-45c9-abce-5425b523af54\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.144759 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-catalog-content\") pod \"e7187f3a-14f6-45c9-abce-5425b523af54\" (UID: \"e7187f3a-14f6-45c9-abce-5425b523af54\") " Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.145936 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-utilities" (OuterVolumeSpecName: "utilities") pod "e7187f3a-14f6-45c9-abce-5425b523af54" (UID: "e7187f3a-14f6-45c9-abce-5425b523af54"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.165310 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7187f3a-14f6-45c9-abce-5425b523af54-kube-api-access-z5tsx" (OuterVolumeSpecName: "kube-api-access-z5tsx") pod "e7187f3a-14f6-45c9-abce-5425b523af54" (UID: "e7187f3a-14f6-45c9-abce-5425b523af54"). 
InnerVolumeSpecName "kube-api-access-z5tsx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.237571 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e7187f3a-14f6-45c9-abce-5425b523af54" (UID: "e7187f3a-14f6-45c9-abce-5425b523af54"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.247174 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z5tsx\" (UniqueName: \"kubernetes.io/projected/e7187f3a-14f6-45c9-abce-5425b523af54-kube-api-access-z5tsx\") on node \"crc\" DevicePath \"\"" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.247216 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.247227 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e7187f3a-14f6-45c9-abce-5425b523af54-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.368236 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4svq_must-gather-dqdst_a49cf713-89e1-4955-bd33-3b894ebdd935/gather/0.log" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.509413 4949 generic.go:334] "Generic (PLEG): container finished" podID="e7187f3a-14f6-45c9-abce-5425b523af54" containerID="278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8" exitCode=0 Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.509487 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-q4g5q" event={"ID":"e7187f3a-14f6-45c9-abce-5425b523af54","Type":"ContainerDied","Data":"278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8"} Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.509549 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-q4g5q" event={"ID":"e7187f3a-14f6-45c9-abce-5425b523af54","Type":"ContainerDied","Data":"ccad977e46362c471cc162bcbb2805e5d192addc1a0da9b989503dc0864cc196"} Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.509559 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-q4g5q" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.509581 4949 scope.go:117] "RemoveContainer" containerID="278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.538309 4949 scope.go:117] "RemoveContainer" containerID="43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.571851 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-q4g5q"] Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.579766 4949 scope.go:117] "RemoveContainer" containerID="021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.579938 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-q4g5q"] Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.623719 4949 scope.go:117] "RemoveContainer" containerID="278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8" Oct 01 17:21:54 crc kubenswrapper[4949]: E1001 17:21:54.624560 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8\": container with ID starting with 278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8 not found: ID does not exist" containerID="278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.624610 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8"} err="failed to get container status \"278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8\": rpc error: code = NotFound desc = could not find container \"278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8\": container with ID starting with 278d7524b70c132c860ebf0f51ef72adc923c4ddfb4090b7519d7f154c8bf1c8 not found: ID does not exist" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.624644 4949 scope.go:117] "RemoveContainer" containerID="43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459" Oct 01 17:21:54 crc kubenswrapper[4949]: E1001 17:21:54.625162 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459\": container with ID starting with 43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459 not found: ID does not exist" containerID="43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.625194 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459"} err="failed to get container status \"43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459\": rpc error: code = NotFound desc = could not find container \"43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459\": container with ID 
starting with 43b9ce7f1d4a44858dbd5017e6f818fa747da9754f3b3ab61f7218a66d38e459 not found: ID does not exist" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.625217 4949 scope.go:117] "RemoveContainer" containerID="021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed" Oct 01 17:21:54 crc kubenswrapper[4949]: E1001 17:21:54.626165 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed\": container with ID starting with 021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed not found: ID does not exist" containerID="021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed" Oct 01 17:21:54 crc kubenswrapper[4949]: I1001 17:21:54.626193 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed"} err="failed to get container status \"021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed\": rpc error: code = NotFound desc = could not find container \"021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed\": container with ID starting with 021fc58b48bedf2c713ced761fdd60c6bf1c0cf6c5ab6aedf9d9b702576e98ed not found: ID does not exist" Oct 01 17:21:55 crc kubenswrapper[4949]: I1001 17:21:55.613432 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7187f3a-14f6-45c9-abce-5425b523af54" path="/var/lib/kubelet/pods/e7187f3a-14f6-45c9-abce-5425b523af54/volumes" Oct 01 17:22:02 crc kubenswrapper[4949]: I1001 17:22:02.567998 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-c4svq/must-gather-dqdst"] Oct 01 17:22:02 crc kubenswrapper[4949]: I1001 17:22:02.568783 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-c4svq/must-gather-dqdst" 
podUID="a49cf713-89e1-4955-bd33-3b894ebdd935" containerName="copy" containerID="cri-o://1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b" gracePeriod=2 Oct 01 17:22:02 crc kubenswrapper[4949]: I1001 17:22:02.575842 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-c4svq/must-gather-dqdst"] Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.050720 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4svq_must-gather-dqdst_a49cf713-89e1-4955-bd33-3b894ebdd935/copy/0.log" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.051460 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.239696 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d62sj\" (UniqueName: \"kubernetes.io/projected/a49cf713-89e1-4955-bd33-3b894ebdd935-kube-api-access-d62sj\") pod \"a49cf713-89e1-4955-bd33-3b894ebdd935\" (UID: \"a49cf713-89e1-4955-bd33-3b894ebdd935\") " Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.239916 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a49cf713-89e1-4955-bd33-3b894ebdd935-must-gather-output\") pod \"a49cf713-89e1-4955-bd33-3b894ebdd935\" (UID: \"a49cf713-89e1-4955-bd33-3b894ebdd935\") " Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.254325 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a49cf713-89e1-4955-bd33-3b894ebdd935-kube-api-access-d62sj" (OuterVolumeSpecName: "kube-api-access-d62sj") pod "a49cf713-89e1-4955-bd33-3b894ebdd935" (UID: "a49cf713-89e1-4955-bd33-3b894ebdd935"). InnerVolumeSpecName "kube-api-access-d62sj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.343042 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d62sj\" (UniqueName: \"kubernetes.io/projected/a49cf713-89e1-4955-bd33-3b894ebdd935-kube-api-access-d62sj\") on node \"crc\" DevicePath \"\"" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.424393 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a49cf713-89e1-4955-bd33-3b894ebdd935-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "a49cf713-89e1-4955-bd33-3b894ebdd935" (UID: "a49cf713-89e1-4955-bd33-3b894ebdd935"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.444384 4949 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/a49cf713-89e1-4955-bd33-3b894ebdd935-must-gather-output\") on node \"crc\" DevicePath \"\"" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.611375 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a49cf713-89e1-4955-bd33-3b894ebdd935" path="/var/lib/kubelet/pods/a49cf713-89e1-4955-bd33-3b894ebdd935/volumes" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.618356 4949 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-c4svq_must-gather-dqdst_a49cf713-89e1-4955-bd33-3b894ebdd935/copy/0.log" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.618677 4949 generic.go:334] "Generic (PLEG): container finished" podID="a49cf713-89e1-4955-bd33-3b894ebdd935" containerID="1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b" exitCode=143 Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.618730 4949 scope.go:117] "RemoveContainer" containerID="1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b" Oct 01 17:22:03 crc 
kubenswrapper[4949]: I1001 17:22:03.618758 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-c4svq/must-gather-dqdst" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.652527 4949 scope.go:117] "RemoveContainer" containerID="669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.738525 4949 scope.go:117] "RemoveContainer" containerID="1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b" Oct 01 17:22:03 crc kubenswrapper[4949]: E1001 17:22:03.739101 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b\": container with ID starting with 1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b not found: ID does not exist" containerID="1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.739172 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b"} err="failed to get container status \"1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b\": rpc error: code = NotFound desc = could not find container \"1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b\": container with ID starting with 1cd6f8311f70e08803c88c149023082f889731d69316539ee1b4a5c9bbcf8c9b not found: ID does not exist" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.739206 4949 scope.go:117] "RemoveContainer" containerID="669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5" Oct 01 17:22:03 crc kubenswrapper[4949]: E1001 17:22:03.739538 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5\": container with ID starting with 669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5 not found: ID does not exist" containerID="669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5" Oct 01 17:22:03 crc kubenswrapper[4949]: I1001 17:22:03.739575 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5"} err="failed to get container status \"669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5\": rpc error: code = NotFound desc = could not find container \"669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5\": container with ID starting with 669778ae6302344e4d24e22f9a3547684ba4721eb7acf6f01c8eebfbc31109f5 not found: ID does not exist" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.207825 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-cwwh4"] Oct 01 17:22:04 crc kubenswrapper[4949]: E1001 17:22:04.212285 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7187f3a-14f6-45c9-abce-5425b523af54" containerName="registry-server" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.212317 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7187f3a-14f6-45c9-abce-5425b523af54" containerName="registry-server" Oct 01 17:22:04 crc kubenswrapper[4949]: E1001 17:22:04.212347 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49cf713-89e1-4955-bd33-3b894ebdd935" containerName="gather" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.212355 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49cf713-89e1-4955-bd33-3b894ebdd935" containerName="gather" Oct 01 17:22:04 crc kubenswrapper[4949]: E1001 17:22:04.212371 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7187f3a-14f6-45c9-abce-5425b523af54" 
containerName="extract-utilities" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.212377 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7187f3a-14f6-45c9-abce-5425b523af54" containerName="extract-utilities" Oct 01 17:22:04 crc kubenswrapper[4949]: E1001 17:22:04.212402 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e7187f3a-14f6-45c9-abce-5425b523af54" containerName="extract-content" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.212407 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="e7187f3a-14f6-45c9-abce-5425b523af54" containerName="extract-content" Oct 01 17:22:04 crc kubenswrapper[4949]: E1001 17:22:04.212425 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a49cf713-89e1-4955-bd33-3b894ebdd935" containerName="copy" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.212432 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="a49cf713-89e1-4955-bd33-3b894ebdd935" containerName="copy" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.218875 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="e7187f3a-14f6-45c9-abce-5425b523af54" containerName="registry-server" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.218919 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49cf713-89e1-4955-bd33-3b894ebdd935" containerName="gather" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.218944 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="a49cf713-89e1-4955-bd33-3b894ebdd935" containerName="copy" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.222575 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.244303 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwwh4"] Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.269615 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc37490-116e-48a0-bc8f-80cf30d2b1e9-utilities\") pod \"redhat-operators-cwwh4\" (UID: \"2cc37490-116e-48a0-bc8f-80cf30d2b1e9\") " pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.269659 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc37490-116e-48a0-bc8f-80cf30d2b1e9-catalog-content\") pod \"redhat-operators-cwwh4\" (UID: \"2cc37490-116e-48a0-bc8f-80cf30d2b1e9\") " pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.269700 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t6z4s\" (UniqueName: \"kubernetes.io/projected/2cc37490-116e-48a0-bc8f-80cf30d2b1e9-kube-api-access-t6z4s\") pod \"redhat-operators-cwwh4\" (UID: \"2cc37490-116e-48a0-bc8f-80cf30d2b1e9\") " pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.371281 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc37490-116e-48a0-bc8f-80cf30d2b1e9-utilities\") pod \"redhat-operators-cwwh4\" (UID: \"2cc37490-116e-48a0-bc8f-80cf30d2b1e9\") " pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.371626 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc37490-116e-48a0-bc8f-80cf30d2b1e9-catalog-content\") pod \"redhat-operators-cwwh4\" (UID: \"2cc37490-116e-48a0-bc8f-80cf30d2b1e9\") " pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.371694 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t6z4s\" (UniqueName: \"kubernetes.io/projected/2cc37490-116e-48a0-bc8f-80cf30d2b1e9-kube-api-access-t6z4s\") pod \"redhat-operators-cwwh4\" (UID: \"2cc37490-116e-48a0-bc8f-80cf30d2b1e9\") " pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.371789 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2cc37490-116e-48a0-bc8f-80cf30d2b1e9-utilities\") pod \"redhat-operators-cwwh4\" (UID: \"2cc37490-116e-48a0-bc8f-80cf30d2b1e9\") " pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.372051 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2cc37490-116e-48a0-bc8f-80cf30d2b1e9-catalog-content\") pod \"redhat-operators-cwwh4\" (UID: \"2cc37490-116e-48a0-bc8f-80cf30d2b1e9\") " pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.390527 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t6z4s\" (UniqueName: \"kubernetes.io/projected/2cc37490-116e-48a0-bc8f-80cf30d2b1e9-kube-api-access-t6z4s\") pod \"redhat-operators-cwwh4\" (UID: \"2cc37490-116e-48a0-bc8f-80cf30d2b1e9\") " pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:04 crc kubenswrapper[4949]: I1001 17:22:04.558599 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:05 crc kubenswrapper[4949]: I1001 17:22:05.089072 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-cwwh4"] Oct 01 17:22:05 crc kubenswrapper[4949]: I1001 17:22:05.673553 4949 generic.go:334] "Generic (PLEG): container finished" podID="2cc37490-116e-48a0-bc8f-80cf30d2b1e9" containerID="cd8495f0c17daeceb355b32e5984d7c2f9b5e1ada4b6f58d9347ae12b93583b1" exitCode=0 Oct 01 17:22:05 crc kubenswrapper[4949]: I1001 17:22:05.673697 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwwh4" event={"ID":"2cc37490-116e-48a0-bc8f-80cf30d2b1e9","Type":"ContainerDied","Data":"cd8495f0c17daeceb355b32e5984d7c2f9b5e1ada4b6f58d9347ae12b93583b1"} Oct 01 17:22:05 crc kubenswrapper[4949]: I1001 17:22:05.673869 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwwh4" event={"ID":"2cc37490-116e-48a0-bc8f-80cf30d2b1e9","Type":"ContainerStarted","Data":"679b72d806f13b5391fa54cc81366bdc2621bf622bf70f1186c45a3da93d851c"} Oct 01 17:22:14 crc kubenswrapper[4949]: I1001 17:22:14.765420 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwwh4" event={"ID":"2cc37490-116e-48a0-bc8f-80cf30d2b1e9","Type":"ContainerStarted","Data":"cd41f656c063b05d4a860f9b1b2b725910f46f68c94ea2be7b495769f69e09eb"} Oct 01 17:22:16 crc kubenswrapper[4949]: I1001 17:22:16.790973 4949 generic.go:334] "Generic (PLEG): container finished" podID="2cc37490-116e-48a0-bc8f-80cf30d2b1e9" containerID="cd41f656c063b05d4a860f9b1b2b725910f46f68c94ea2be7b495769f69e09eb" exitCode=0 Oct 01 17:22:16 crc kubenswrapper[4949]: I1001 17:22:16.790994 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwwh4" 
event={"ID":"2cc37490-116e-48a0-bc8f-80cf30d2b1e9","Type":"ContainerDied","Data":"cd41f656c063b05d4a860f9b1b2b725910f46f68c94ea2be7b495769f69e09eb"} Oct 01 17:22:17 crc kubenswrapper[4949]: I1001 17:22:17.801254 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-cwwh4" event={"ID":"2cc37490-116e-48a0-bc8f-80cf30d2b1e9","Type":"ContainerStarted","Data":"d18f1d1144b3e9ba5c3cc382b3023f2bbab489a2142ada7fc6644e1d03fd4afe"} Oct 01 17:22:17 crc kubenswrapper[4949]: I1001 17:22:17.824076 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-cwwh4" podStartSLOduration=2.261085803 podStartE2EDuration="13.824053357s" podCreationTimestamp="2025-10-01 17:22:04 +0000 UTC" firstStartedPulling="2025-10-01 17:22:05.676563965 +0000 UTC m=+6024.982170156" lastFinishedPulling="2025-10-01 17:22:17.239531519 +0000 UTC m=+6036.545137710" observedRunningTime="2025-10-01 17:22:17.821182248 +0000 UTC m=+6037.126788439" watchObservedRunningTime="2025-10-01 17:22:17.824053357 +0000 UTC m=+6037.129659558" Oct 01 17:22:24 crc kubenswrapper[4949]: I1001 17:22:24.559323 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:24 crc kubenswrapper[4949]: I1001 17:22:24.559859 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:24 crc kubenswrapper[4949]: I1001 17:22:24.619476 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:24 crc kubenswrapper[4949]: I1001 17:22:24.923011 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-cwwh4" Oct 01 17:22:24 crc kubenswrapper[4949]: I1001 17:22:24.993450 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/redhat-operators-cwwh4"] Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.036402 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btlcx"] Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.036678 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-btlcx" podUID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerName="registry-server" containerID="cri-o://086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8" gracePeriod=2 Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.526077 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.587279 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-catalog-content\") pod \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.587351 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-utilities\") pod \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.587420 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4jq7\" (UniqueName: \"kubernetes.io/projected/55f19eb7-e134-48c0-96b1-4b7801cefa6b-kube-api-access-d4jq7\") pod \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\" (UID: \"55f19eb7-e134-48c0-96b1-4b7801cefa6b\") " Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.588067 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-utilities" (OuterVolumeSpecName: "utilities") pod "55f19eb7-e134-48c0-96b1-4b7801cefa6b" (UID: "55f19eb7-e134-48c0-96b1-4b7801cefa6b"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.605828 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55f19eb7-e134-48c0-96b1-4b7801cefa6b-kube-api-access-d4jq7" (OuterVolumeSpecName: "kube-api-access-d4jq7") pod "55f19eb7-e134-48c0-96b1-4b7801cefa6b" (UID: "55f19eb7-e134-48c0-96b1-4b7801cefa6b"). InnerVolumeSpecName "kube-api-access-d4jq7". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.674327 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "55f19eb7-e134-48c0-96b1-4b7801cefa6b" (UID: "55f19eb7-e134-48c0-96b1-4b7801cefa6b"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.689479 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.689512 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/55f19eb7-e134-48c0-96b1-4b7801cefa6b-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.689521 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4jq7\" (UniqueName: \"kubernetes.io/projected/55f19eb7-e134-48c0-96b1-4b7801cefa6b-kube-api-access-d4jq7\") on node \"crc\" DevicePath \"\"" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.887114 4949 generic.go:334] "Generic (PLEG): container finished" podID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerID="086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8" exitCode=0 Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.887782 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlcx" event={"ID":"55f19eb7-e134-48c0-96b1-4b7801cefa6b","Type":"ContainerDied","Data":"086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8"} Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.887826 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-btlcx" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.887862 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-btlcx" event={"ID":"55f19eb7-e134-48c0-96b1-4b7801cefa6b","Type":"ContainerDied","Data":"0304078d3a21784e7183ba3737211e02ab1f727eb9ef97913dddf0e49dc60dbc"} Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.887892 4949 scope.go:117] "RemoveContainer" containerID="086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.914979 4949 scope.go:117] "RemoveContainer" containerID="2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.931530 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-btlcx"] Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.942702 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-btlcx"] Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.953565 4949 scope.go:117] "RemoveContainer" containerID="7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.980173 4949 scope.go:117] "RemoveContainer" containerID="086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8" Oct 01 17:22:25 crc kubenswrapper[4949]: E1001 17:22:25.980596 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8\": container with ID starting with 086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8 not found: ID does not exist" containerID="086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.980629 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8"} err="failed to get container status \"086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8\": rpc error: code = NotFound desc = could not find container \"086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8\": container with ID starting with 086e4ca5f5275f659c2c0afd3a569127804f4c5d38c35cdcc4126e8af29141f8 not found: ID does not exist" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.980649 4949 scope.go:117] "RemoveContainer" containerID="2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b" Oct 01 17:22:25 crc kubenswrapper[4949]: E1001 17:22:25.980991 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b\": container with ID starting with 2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b not found: ID does not exist" containerID="2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.981020 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b"} err="failed to get container status \"2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b\": rpc error: code = NotFound desc = could not find container \"2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b\": container with ID starting with 2ffdd252deeaab42252747fc8ac59dee6cac6c5c2d5f5ef0256704364424700b not found: ID does not exist" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.981034 4949 scope.go:117] "RemoveContainer" containerID="7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb" Oct 01 17:22:25 crc kubenswrapper[4949]: E1001 
17:22:25.983939 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb\": container with ID starting with 7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb not found: ID does not exist" containerID="7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb" Oct 01 17:22:25 crc kubenswrapper[4949]: I1001 17:22:25.983976 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb"} err="failed to get container status \"7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb\": rpc error: code = NotFound desc = could not find container \"7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb\": container with ID starting with 7cd485211d5f5b2eb7865bf2f0b8bd8caa91fbf12bc8d86f1dde1d25342f32bb not found: ID does not exist" Oct 01 17:22:27 crc kubenswrapper[4949]: I1001 17:22:27.613576 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" path="/var/lib/kubelet/pods/55f19eb7-e134-48c0-96b1-4b7801cefa6b/volumes" Oct 01 17:22:48 crc kubenswrapper[4949]: I1001 17:22:48.038174 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:22:48 crc kubenswrapper[4949]: I1001 17:22:48.038785 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection 
refused" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.101319 4949 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-bmhq6"] Oct 01 17:22:59 crc kubenswrapper[4949]: E1001 17:22:59.102258 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerName="extract-utilities" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.102282 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerName="extract-utilities" Oct 01 17:22:59 crc kubenswrapper[4949]: E1001 17:22:59.102293 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerName="registry-server" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.102299 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerName="registry-server" Oct 01 17:22:59 crc kubenswrapper[4949]: E1001 17:22:59.102313 4949 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerName="extract-content" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.102320 4949 state_mem.go:107] "Deleted CPUSet assignment" podUID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerName="extract-content" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.102494 4949 memory_manager.go:354] "RemoveStaleState removing state" podUID="55f19eb7-e134-48c0-96b1-4b7801cefa6b" containerName="registry-server" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.103913 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.114473 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bmhq6"] Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.237887 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-utilities\") pod \"community-operators-bmhq6\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.238320 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-catalog-content\") pod \"community-operators-bmhq6\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.238488 4949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpkbz\" (UniqueName: \"kubernetes.io/projected/d11a7509-f83e-4c7f-a491-e59fbee0025f-kube-api-access-lpkbz\") pod \"community-operators-bmhq6\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.340100 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-catalog-content\") pod \"community-operators-bmhq6\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.340198 4949 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-lpkbz\" (UniqueName: \"kubernetes.io/projected/d11a7509-f83e-4c7f-a491-e59fbee0025f-kube-api-access-lpkbz\") pod \"community-operators-bmhq6\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.340240 4949 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-utilities\") pod \"community-operators-bmhq6\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.340726 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-catalog-content\") pod \"community-operators-bmhq6\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.340739 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-utilities\") pod \"community-operators-bmhq6\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.362412 4949 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpkbz\" (UniqueName: \"kubernetes.io/projected/d11a7509-f83e-4c7f-a491-e59fbee0025f-kube-api-access-lpkbz\") pod \"community-operators-bmhq6\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:22:59 crc kubenswrapper[4949]: I1001 17:22:59.431966 4949 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:23:00 crc kubenswrapper[4949]: I1001 17:23:00.002075 4949 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-bmhq6"] Oct 01 17:23:00 crc kubenswrapper[4949]: I1001 17:23:00.213918 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmhq6" event={"ID":"d11a7509-f83e-4c7f-a491-e59fbee0025f","Type":"ContainerStarted","Data":"e2613b8dbb5c3f7944cb1b1ab48bf46fa1408091177f54783c0a53c2687b7c34"} Oct 01 17:23:01 crc kubenswrapper[4949]: I1001 17:23:01.222761 4949 generic.go:334] "Generic (PLEG): container finished" podID="d11a7509-f83e-4c7f-a491-e59fbee0025f" containerID="534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6" exitCode=0 Oct 01 17:23:01 crc kubenswrapper[4949]: I1001 17:23:01.223040 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmhq6" event={"ID":"d11a7509-f83e-4c7f-a491-e59fbee0025f","Type":"ContainerDied","Data":"534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6"} Oct 01 17:23:02 crc kubenswrapper[4949]: I1001 17:23:02.232505 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmhq6" event={"ID":"d11a7509-f83e-4c7f-a491-e59fbee0025f","Type":"ContainerStarted","Data":"fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8"} Oct 01 17:23:04 crc kubenswrapper[4949]: I1001 17:23:04.251776 4949 generic.go:334] "Generic (PLEG): container finished" podID="d11a7509-f83e-4c7f-a491-e59fbee0025f" containerID="fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8" exitCode=0 Oct 01 17:23:04 crc kubenswrapper[4949]: I1001 17:23:04.251836 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmhq6" 
event={"ID":"d11a7509-f83e-4c7f-a491-e59fbee0025f","Type":"ContainerDied","Data":"fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8"} Oct 01 17:23:05 crc kubenswrapper[4949]: I1001 17:23:05.266456 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmhq6" event={"ID":"d11a7509-f83e-4c7f-a491-e59fbee0025f","Type":"ContainerStarted","Data":"eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46"} Oct 01 17:23:05 crc kubenswrapper[4949]: I1001 17:23:05.300945 4949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-bmhq6" podStartSLOduration=2.673120428 podStartE2EDuration="6.300923921s" podCreationTimestamp="2025-10-01 17:22:59 +0000 UTC" firstStartedPulling="2025-10-01 17:23:01.226234936 +0000 UTC m=+6080.531841127" lastFinishedPulling="2025-10-01 17:23:04.854038389 +0000 UTC m=+6084.159644620" observedRunningTime="2025-10-01 17:23:05.293772223 +0000 UTC m=+6084.599378414" watchObservedRunningTime="2025-10-01 17:23:05.300923921 +0000 UTC m=+6084.606530112" Oct 01 17:23:09 crc kubenswrapper[4949]: I1001 17:23:09.433359 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:23:09 crc kubenswrapper[4949]: I1001 17:23:09.433943 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:23:09 crc kubenswrapper[4949]: I1001 17:23:09.482939 4949 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:23:10 crc kubenswrapper[4949]: I1001 17:23:10.357215 4949 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:23:12 crc kubenswrapper[4949]: I1001 17:23:12.103238 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/community-operators-bmhq6"] Oct 01 17:23:12 crc kubenswrapper[4949]: I1001 17:23:12.328117 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-bmhq6" podUID="d11a7509-f83e-4c7f-a491-e59fbee0025f" containerName="registry-server" containerID="cri-o://eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46" gracePeriod=2 Oct 01 17:23:12 crc kubenswrapper[4949]: I1001 17:23:12.775275 4949 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:23:12 crc kubenswrapper[4949]: I1001 17:23:12.943213 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-catalog-content\") pod \"d11a7509-f83e-4c7f-a491-e59fbee0025f\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " Oct 01 17:23:12 crc kubenswrapper[4949]: I1001 17:23:12.943368 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpkbz\" (UniqueName: \"kubernetes.io/projected/d11a7509-f83e-4c7f-a491-e59fbee0025f-kube-api-access-lpkbz\") pod \"d11a7509-f83e-4c7f-a491-e59fbee0025f\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " Oct 01 17:23:12 crc kubenswrapper[4949]: I1001 17:23:12.943427 4949 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-utilities\") pod \"d11a7509-f83e-4c7f-a491-e59fbee0025f\" (UID: \"d11a7509-f83e-4c7f-a491-e59fbee0025f\") " Oct 01 17:23:12 crc kubenswrapper[4949]: I1001 17:23:12.945012 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-utilities" (OuterVolumeSpecName: "utilities") pod "d11a7509-f83e-4c7f-a491-e59fbee0025f" (UID: 
"d11a7509-f83e-4c7f-a491-e59fbee0025f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:23:12 crc kubenswrapper[4949]: I1001 17:23:12.951939 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d11a7509-f83e-4c7f-a491-e59fbee0025f-kube-api-access-lpkbz" (OuterVolumeSpecName: "kube-api-access-lpkbz") pod "d11a7509-f83e-4c7f-a491-e59fbee0025f" (UID: "d11a7509-f83e-4c7f-a491-e59fbee0025f"). InnerVolumeSpecName "kube-api-access-lpkbz". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.045841 4949 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lpkbz\" (UniqueName: \"kubernetes.io/projected/d11a7509-f83e-4c7f-a491-e59fbee0025f-kube-api-access-lpkbz\") on node \"crc\" DevicePath \"\"" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.046106 4949 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-utilities\") on node \"crc\" DevicePath \"\"" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.181628 4949 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d11a7509-f83e-4c7f-a491-e59fbee0025f" (UID: "d11a7509-f83e-4c7f-a491-e59fbee0025f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.251261 4949 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d11a7509-f83e-4c7f-a491-e59fbee0025f-catalog-content\") on node \"crc\" DevicePath \"\"" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.338828 4949 generic.go:334] "Generic (PLEG): container finished" podID="d11a7509-f83e-4c7f-a491-e59fbee0025f" containerID="eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46" exitCode=0 Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.338873 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmhq6" event={"ID":"d11a7509-f83e-4c7f-a491-e59fbee0025f","Type":"ContainerDied","Data":"eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46"} Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.338884 4949 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-bmhq6" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.338901 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-bmhq6" event={"ID":"d11a7509-f83e-4c7f-a491-e59fbee0025f","Type":"ContainerDied","Data":"e2613b8dbb5c3f7944cb1b1ab48bf46fa1408091177f54783c0a53c2687b7c34"} Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.338917 4949 scope.go:117] "RemoveContainer" containerID="eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.374915 4949 scope.go:117] "RemoveContainer" containerID="fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.381527 4949 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-bmhq6"] Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.392676 4949 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-bmhq6"] Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.417972 4949 scope.go:117] "RemoveContainer" containerID="534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.437473 4949 scope.go:117] "RemoveContainer" containerID="eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46" Oct 01 17:23:13 crc kubenswrapper[4949]: E1001 17:23:13.438591 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46\": container with ID starting with eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46 not found: ID does not exist" containerID="eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.438681 4949 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46"} err="failed to get container status \"eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46\": rpc error: code = NotFound desc = could not find container \"eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46\": container with ID starting with eaf5e52579a6d6f55594b91d375b5b90e03b1a97c0a3dc96a4957150f0630a46 not found: ID does not exist" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.438734 4949 scope.go:117] "RemoveContainer" containerID="fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8" Oct 01 17:23:13 crc kubenswrapper[4949]: E1001 17:23:13.439209 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8\": container with ID starting with fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8 not found: ID does not exist" containerID="fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.439250 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8"} err="failed to get container status \"fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8\": rpc error: code = NotFound desc = could not find container \"fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8\": container with ID starting with fc2047703abd6076043e2610b5142ea6a28e9878b4c5e43ad27a76c6ff85a1e8 not found: ID does not exist" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.439340 4949 scope.go:117] "RemoveContainer" containerID="534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6" Oct 01 17:23:13 crc kubenswrapper[4949]: E1001 
17:23:13.439789 4949 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6\": container with ID starting with 534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6 not found: ID does not exist" containerID="534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.439836 4949 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6"} err="failed to get container status \"534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6\": rpc error: code = NotFound desc = could not find container \"534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6\": container with ID starting with 534a0a64b3873207cb2932acc8f37047eeebb9eceae6a6e6127c92d87fe6b6e6 not found: ID does not exist" Oct 01 17:23:13 crc kubenswrapper[4949]: E1001 17:23:13.495882 4949 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd11a7509_f83e_4c7f_a491_e59fbee0025f.slice\": RecentStats: unable to find data in memory cache], [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd11a7509_f83e_4c7f_a491_e59fbee0025f.slice/crio-e2613b8dbb5c3f7944cb1b1ab48bf46fa1408091177f54783c0a53c2687b7c34\": RecentStats: unable to find data in memory cache]" Oct 01 17:23:13 crc kubenswrapper[4949]: I1001 17:23:13.613791 4949 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d11a7509-f83e-4c7f-a491-e59fbee0025f" path="/var/lib/kubelet/pods/d11a7509-f83e-4c7f-a491-e59fbee0025f/volumes" Oct 01 17:23:18 crc kubenswrapper[4949]: I1001 17:23:18.038278 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon 
namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:23:18 crc kubenswrapper[4949]: I1001 17:23:18.038721 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 17:23:48 crc kubenswrapper[4949]: I1001 17:23:48.038197 4949 patch_prober.go:28] interesting pod/machine-config-daemon-l6287 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Oct 01 17:23:48 crc kubenswrapper[4949]: I1001 17:23:48.038696 4949 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Oct 01 17:23:48 crc kubenswrapper[4949]: I1001 17:23:48.038738 4949 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-l6287" Oct 01 17:23:48 crc kubenswrapper[4949]: I1001 17:23:48.039531 4949 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"41caeb7faa29cac2601895c29779f001ea635f6bdd8eac6167ddb436f370ac42"} pod="openshift-machine-config-operator/machine-config-daemon-l6287" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Oct 01 17:23:48 crc 
kubenswrapper[4949]: I1001 17:23:48.039609 4949 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-l6287" podUID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerName="machine-config-daemon" containerID="cri-o://41caeb7faa29cac2601895c29779f001ea635f6bdd8eac6167ddb436f370ac42" gracePeriod=600 Oct 01 17:23:48 crc kubenswrapper[4949]: I1001 17:23:48.695913 4949 generic.go:334] "Generic (PLEG): container finished" podID="0e15cd67-d4ad-49b8-96a6-da114105e558" containerID="41caeb7faa29cac2601895c29779f001ea635f6bdd8eac6167ddb436f370ac42" exitCode=0 Oct 01 17:23:48 crc kubenswrapper[4949]: I1001 17:23:48.696251 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerDied","Data":"41caeb7faa29cac2601895c29779f001ea635f6bdd8eac6167ddb436f370ac42"} Oct 01 17:23:48 crc kubenswrapper[4949]: I1001 17:23:48.696291 4949 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-l6287" event={"ID":"0e15cd67-d4ad-49b8-96a6-da114105e558","Type":"ContainerStarted","Data":"30ba526d806877c9c4d80a0198bd5c68b3a284eea8e14af25e5adc1988897226"} Oct 01 17:23:48 crc kubenswrapper[4949]: I1001 17:23:48.696312 4949 scope.go:117] "RemoveContainer" containerID="430799dac3ea9ee99ed16103e75915d512c98cde0242ee3607fc1dd10b2b3cf0"